Autonomous Decision-Making: Assessing the Technology and its Impact on Industry and Society

01 Nov 2017

Do computers make better decisions? If so, what will be the impact on business, society and on ourselves? What conditions would make autonomous decision-making acceptable? At a recent workshop organized by the ETH Risk Center together with the Swiss Re Institute, leading experts from academia and industry joined more than a hundred participants to discuss the current state of the art of machine learning technologies for autonomous decision-making.


“The need for more transparency” was one of the key reasons for the workshop: transparency not only about the technical workings of the algorithms – which is possible only up to a point – but also about the driving forces behind them. Participants agreed that more transparency is vital to build the level of trust needed to accept and integrate the technology, especially for autonomous decision-making. For the near future, an ideal scenario would be hybrid decision-making, for example running computer-based systems in parallel with human decision-makers and comparing the outcomes.



AI as "a collective weird thing"?

The first part of the event assessed the scope and limits of the technology. In his opening talk, ETH Professor Thomas Hofmann defined intelligence as the ability to understand, or to make sense of things, and to act accordingly. He characterised intelligence as the "crown of evolution", with evolution pushing towards ever more of it; on this journey, we humans may be only an intermediate product. Machine intelligence as such is not bound to merely mimic human skills. Far from it: Hofmann expects that machines will not stop at the human level but will enter new dimensions of intelligence by drawing on networked intelligence. From today's vantage point, we may only be able to imagine future AI as a "collective weird thing". In computer vision, machines have already achieved forms of autonomous perception and superhuman face recognition. Combined with recent achievements in language understanding, such as human-level voice recognition, machine reading that links text with knowledge representations already allows some form of reasoning across billions of documents. Recent developments in machine translation and, even more recently, in reinforcement learning show the enormous potential that lies ahead of us.

In the second talk, Prof. Thomas Hills from Warwick's Department of Psychology showed that interacting with social media – clicking, for example – is already a form of acting, and that by using these platforms and processing the information they deliver, we in turn change our own personality. Social information thus exerts a huge influence on our personality, and the algorithms co-evolve along these lines. This creates a feedback process which, most importantly, may also amplify our biases; social risk amplification is one example. According to Hills, we must be aware that whenever we create data (with our biases in it) and have algorithms process these data, the algorithms take over our biases. On building trust in algorithms for decision-making, he pointed to the psychological fact that people want narratives for doing things: in the end, we need reasons to justify an action. Reasons can only be derived from a system if there is a certain level of transparency about the system itself and about the mechanisms in which it is embedded.
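To make the feedback mechanism concrete, here is a minimal sketch, assuming a toy recommender that re-allocates exposure in proportion to clicks and users who click slightly more on whatever they are shown more of; the 51/49 starting tilt and the nudge strength are illustrative assumptions, not figures from the talk.

```python
# Minimal sketch of a bias-amplifying feedback loop, assuming a toy
# recommender that re-allocates exposure in proportion to clicks, and
# users who click slightly more on whatever they are shown more of.
exposure = {"A": 0.51, "B": 0.49}   # initial, slightly biased exposure shares
NUDGE = 0.05                        # assumed strength of the human click bias

for step in range(50):
    # Users' clicks mirror exposure, plus a small tilt towards the majority...
    clicks = {t: s + NUDGE * (s - 0.5) for t, s in exposure.items()}
    # ...and the algorithm re-allocates exposure in proportion to those clicks.
    total = sum(clicks.values())
    exposure = {t: c / total for t, c in clicks.items()}

print(exposure)  # the initial 51/49 tilt has grown to roughly 61/39
```

Neither side intends the outcome: a 2-point human bias, fed back through the algorithm fifty times, becomes a 22-point skew in what everyone sees.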

In his talk, Prof. Christoph Hölscher, Chair of Cognitive Science at ETH Zurich, added that machines today may already "understand" a lot, but in most situations this understanding is very context-specific: whenever the context switches, the machine gets into trouble. Studying human–machine interaction and the feedback processes between the two also helps us understand what is going on in the human mind. Concerning the acceptance of autonomous decision systems, Hölscher pointed to the psychological fact that people want to predict, understand and be in control, and do not want to be limited in their actions. He also warned that active filtering by an algorithm – for example, withholding information about a specific person in a social network – might lead us to conclude that this was a deliberate act by that person, when in fact it was the algorithm that "unfriended" us from them. He underlined that we still have trouble separating human action from computer-based algorithmic action, and that more transparency is necessary here as well.

Nico Lavarini, Chief Scientist at Expert Systems, gave insights into the state-of-the-art integration of AI into insurance business processes such as claims management and contract matching, as well as property and risk evaluation. Reduced subjectivity, time savings and cost savings are favourable outcomes of this integration in cases where the scope of the problem can be framed accordingly. However, a global quality evaluation of the integration's success is complex: owing to cognitive biases, inter-rater agreement among human experts is variable, sometimes only about 60%, so the benchmark itself is uncertain. We should therefore be realistic about how much efficiency we can actually expect from technological innovations that are ultimately based on human input.
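As a rough illustration of why a ~60% inter-rater agreement muddies any global quality evaluation, the sketch below contrasts raw percent agreement with chance-corrected agreement (Cohen's kappa) for two hypothetical claim reviewers; all labels and figures are invented for illustration, not data from the talk.

```python
from collections import Counter

def percent_agreement(a, b):
    """Share of items on which two raters give the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for chance agreement."""
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    n = len(a)
    # Expected agreement if both raters labelled at random with their
    # own observed label frequencies.
    pe = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / n ** 2
    return (po - pe) / (1 - pe)

# Invented labels from two hypothetical claim reviewers
rater1 = ["approve", "reject", "approve", "approve", "escalate",
          "approve", "reject", "approve", "approve", "reject"]
rater2 = ["approve", "approve", "approve", "reject", "escalate",
          "approve", "reject", "reject", "approve", "approve"]

print(f"raw agreement: {percent_agreement(rater1, rater2):.0%}")  # 60%
print(f"Cohen's kappa: {cohens_kappa(rater1, rater2):.2f}")       # ~0.26
```

Once chance agreement is removed, a nominal 60% shrinks to a kappa of about 0.26 – and if human experts agree only that weakly, any accuracy score for the algorithm measured against their labels inherits the same uncertainty.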

A painful learning phase for the industry

To close the first part, Prof. Patrick Cheridito, Chair of Mathematics at ETH and Member of the Risk Center, hosted a panel discussion. Martin Schürz, Head of Engineering Services at Swiss Re, acknowledged that the "learning phase" was painful and is still ongoing; nevertheless, acceptance of the technology has to grow. He underlined that more transparency is necessary in the first place, but that once a certain level of trust is reached, people will stop asking for explanations. Still, more effort has to be invested in understanding the specific nature of problems in order to make them "solvable" by computer. Given that businesses have only limited time and money, investing effort in framing and solving a single problem with an uncertain outcome is itself a risk. Olivier Verscheure, Executive Director of the Swiss Data Science Center, gave insights into the centre's new platform, which provides the infrastructure and expertise needed to tackle such problems. Asked about future perspectives, Prof. Hills suggested we ask ourselves: What do we really need? What improves our lives, and how do we create products that increase wellbeing?


Historical perspective – is it really different this time? 

Live demonstrations by ETH spin-offs featured in the lunch break, followed by the second part of the conference on society and the future of work.

Daniel Castro from the Information Technology and Innovation Foundation (ITIF), Washington, framed the impact of the "algorithm economy" on industries, firms and workers. Putting the recent debate into historical perspective, he showed that technology has always had an impact on workers. Although the slogan "this time is different" is attractive, the historical pattern is remarkably stable: many times in the past, people feared that automation would eliminate workers, yet none of these fears ever matched reality. On the contrary, technological innovation has consistently boosted productivity and created the tasks behind millions of new jobs. According to the ITIF, AI is expected to create 5 to 6 trillion dollars of value annually by automating knowledge work, and much of AI will boost quality rather than eliminate jobs. Inevitably, some jobs will be eliminated, but many occupations – brick masons, machinists, dental laboratory technicians, social science researchers, firefighters – are still very difficult to automate. From a macro perspective, Castro explained, developed countries need higher productivity just to maintain their current standard of living: the EU's ratio of working-age people to older people, for example, is projected to drop from 3.5 to 2.2 by 2040, and productivity would have to increase by roughly 13% to keep workers' after-tax incomes from declining. Governments have to provide an efficient framework that allows innovation to happen rather than hindering it.
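To see how a figure of that order can arise, here is a toy pay-as-you-go calculation; only the 3.5 and 2.2 support ratios come from the talk, while the replacement-rate assumption is ours, chosen purely for illustration and not necessarily ITIF's model.

```python
# Toy pay-as-you-go illustration: workers fund retiree benefits through a
# payroll tax. Only the support ratios (3.5 workers per older person now,
# 2.2 by 2040) come from the talk; the replacement rate is our assumption.

def after_tax_income(productivity, support_ratio, replacement_rate):
    """Per-worker income after the tax that funds retiree benefits.

    Benefits per retiree are replacement_rate * productivity and are
    financed pay-as-you-go, so the tax rate is replacement_rate / support_ratio.
    """
    return productivity * (1 - replacement_rate / support_ratio)

REPLACEMENT_RATE = 0.57  # assumed benefit as a share of the average wage

baseline = after_tax_income(1.0, 3.5, REPLACEMENT_RATE)
# Productivity growth needed so after-tax income does not fall when the
# support ratio drops from 3.5 to 2.2:
required = baseline / (1 - REPLACEMENT_RATE / 2.2) - 1
print(f"required productivity increase: {required:.0%}")  # ~13%
```

The mechanism, not the exact number, is the point: when 2.2 workers must carry the transfers that 3.5 used to share, each worker must produce noticeably more just to stand still.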



An IT overdose – ensuring that technology serves human values

Prof. Sarah Spiekermann from Vienna University of Economics and Business talked about ethical system engineering. She explained that, when assessing technological innovation, all that matters in the end is human well-being: people should be better off through technology, which should not merely improve business processes as quantified by some questionable metric of success. According to Spiekermann, European countries are seeing a downturn in total factor productivity despite ongoing digitalization, mirrored at the firm level by negative IT utility. What is our current state of well-being, she asked – have we increased our utility since 2000? We always have a choice, a choice of engineering, and we need to be clear about what we are losing. According to Spiekermann, value-based IT design is necessary to frame system engineering standards. Of course, outcomes in technology are hard to anticipate and negative effects are hard to foresee in advance, but an upfront assessment is the only real choice. Such an assessment, she explained, could be based on different moral concepts, such as utilitarian or deontological ethics, and basic human rights would also need to be incorporated into such a design standard. Finally, it is important to place the technology in the specific context where it is used: IT companies, which provide only the technology, usually have no idea of that context. We need to be more context-sensitive and more transparent about the uses and drivers behind the technology.

More transparency – revealing the algorithm's driving force

The final panel discussion was moderated by Prof. Hans Gersbach, Chair of Macroeconomics: Innovation and Policy at ETH and a founding member of the Risk Center, who posed the question of how we might use automation for public tasks. Failures of the system, like corruption, would become much more difficult with a computer-based intelligent system. Nevertheless, governance requires a lot of trust, and that trust is still lacking for autonomous, computer-based systems. For Daniel Castro, a fully automated system is very far off; we will always have humans involved. And while he admits that the "computer world" is not perfect, neither is the "human world". In terms of supervision, Nina Arquint, Head of Group Qualitative Risk Management at Swiss Re, stated that companies must take responsibility for themselves or lose their customers – the biggest asset in any business. In some sense, supervision always lags behind, and according to Arquint the information gap between supervisors and business is growing. With that in mind, she added, supervisors have no choice but to use the technology themselves just to keep pace.


Assessing the benefits of AI through hybrid decision-making 

Prof. Hofmann admitted that finding a bias in a machine learning system is very hard – but finding and eliminating a bias in an organization is also difficult. Fixing the problem might even be easier in an algorithm than training humans to change their behaviour and shed a bias. We are still in a "start-up" phase: if things go well, we will live in a better world. Prof. Spiekermann is more sceptical: machines do not deliver the perfect world of an "objective" system, and one also has to ask, "Who is sitting behind the machines? Big corporations?" To build trust in the system, Hofmann suggested running both systems in parallel – as far as possible – and then comparing the outcomes. Such hybrid decision-making might reveal the benefits and drawbacks of each decision-making system.
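As one way to picture such parallel running, the hypothetical sketch below keeps the algorithm in "shadow mode": its recommendation is logged next to each human decision but never acted upon, and disagreements are flagged for review. The case IDs, labels and decision values are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    human_decision: str   # decision actually taken by the human expert
    model_decision: str   # shadow recommendation, logged but never acted on

def shadow_report(cases):
    """Compare human and model decisions taken in parallel ('shadow mode')."""
    disagreements = [c for c in cases if c.human_decision != c.model_decision]
    agreement = 1 - len(disagreements) / len(cases)
    return agreement, disagreements

# Invented claims-triage decisions
cases = [
    Case("C-001", "approve",  "approve"),
    Case("C-002", "reject",   "approve"),
    Case("C-003", "escalate", "escalate"),
    Case("C-004", "approve",  "reject"),
]

agreement, disagreements = shadow_report(cases)
print(f"human/model agreement: {agreement:.0%}")
for c in disagreements:
    print(f"review {c.case_id}: human={c.human_decision}, model={c.model_decision}")
```

The appeal of this design is that authority is only delegated gradually: the flagged disagreements show where each system's strengths and weaknesses lie before the machine is allowed to decide on its own.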

Summary of the ETH Risk Center and Swiss Re Institute's conference on Autonomous Decision-Making in October 2017. Written by Dr. Bastian Bergmann, Executive Director, ETH Risk Center.
