Over 80 percent of respondents believe there is a medium to high chance of things going badly wrong with artificial intelligence (AI), according to a poll of concerned scientists conducted by Scientists for Global Responsibility (SGR). In the same poll, 96 percent said that AI needs more regulation, and 82 percent thought AI was more likely to create a dystopian than a utopian future.
Media release: 3 November 2018
A new briefing and survey, Artificial intelligence: how little has to go wrong? Autonomous weapons, driverless cars and friendly spies in the home (download available), explores a range of both obvious and more subtle threats and raises stark warnings from across scientific disciplines. It asks: is artificial intelligence evolving more quickly than the regulation needed to manage it? Are we sufficiently able to imagine what major problems may emerge? And can AI even be effectively regulated?
The briefing, published to coincide with SGR’s annual Responsible Science conference in London on 3 November 2018, questions the current introduction of AI in these terms:
‘Autonomous mechanisation and machine learning are being insinuated already, often barely noticed, into the world around us. But is the science on tap or on top? Are we in control, or playing a kind of technological Russian roulette in which we spin the chamber of autonomous learning and/or decision making by machines without being fully in control of what happens when the trigger is pulled? The weaponised metaphor is not hyperbole, because one of the most controversial issues surrounding AI is in the field of warfare and military technology. Applications are, however, emerging and being introduced in places ranging from on the road with driverless cars, to in the home and workplace with digital assistants, and from farm to hospital.’
The poll was conducted amongst SGR’s membership and wider supporter base: professionals for whom the emergence of AI, automation and machine learning will have direct implications. The membership is drawn from diverse fields across science, social science, engineering and technology. Around half are from the natural sciences, such as physics, chemistry and biology; the next largest group is in engineering and information technology; and 10% are in the social sciences.
We asked a range of questions designed to reveal the kinds of consequences most likely to follow from the spread of AI under current conditions. Highlights of the survey include:
- 94% believed that AI would give corporations more power over citizens than vice versa, and the same proportion believed that corporations would benefit more than citizens from the introduction of AI
- 88% of respondents, from a survey sample with a high level of technical literacy, said that the prospect of greater deployment of AI made them feel less in control of their lives
- 83% rated the ability of those designing and introducing artificial intelligence to predict the full range of its likely consequences as ‘poor’, or said that such prediction was ‘not possible’
- Under current regulatory arrangements, 70% thought that ‘not much’ or ‘very little’ would have to go wrong for AI to cause significant harm
We then asked people how likely they thought it was that things might go badly wrong with the introduction of AI into three particular applications:
- With driverless cars, 84% thought there was a medium or high chance of things going badly wrong
- For digital assistants, 82% thought there was a medium or high chance of things going badly wrong
- Where autonomous weapons were concerned, 97% thought there was a medium or high chance of things going badly wrong
In the light of these answers, respondents were clearest about what needs to happen from now on:
- Asked whether more, less or no regulatory change was needed, an unambiguous 96% thought that AI needed more regulation
Before finally asking people what they considered to be the biggest issues raised by AI, we asked whether they thought AI was more likely to set us on a positive course of improvement or to head the other way:
- Among respondents, 14% thought the future would be unchanged, and 5% were persuaded that a utopian future awaited us. But 82% thought that AI was most likely to create a dystopian future
“We’ve asked the question ‘AI – how little has to go wrong?’ The concern is that not much needs to before there are serious consequences. Lethal mistakes have already been made,” says Andrew Simms, lead author of the briefing. “The challenge now is to create the conditions in which things are most likely to go right. At the very least there should be a ban on the development and deployment of autonomous weapons, and the UK government should support this through the UN Convention on Certain Conventional Weapons. Secondly, in order to help create the conditions for an effective regulatory framework, which currently does not exist, 20% of AI research and development budgets should be spent on assessing potential benefits versus potential harm.”
If these survey findings seem strong, they are mild compared to the warning made by SGR’s late, long-term patron, Prof. Stephen Hawking, who warned in 2014 that, “The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate... Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”
Note: membership and survey
At the time of writing, the organisation had 750 members, of whom 16% were ‘associates’: people who have concerns about ethical issues in science, design and technology but not necessarily a related professional background. There were 82 respondents to the poll, 85% of whom were members (around 70 people, or just under one in ten of the membership) and 15% broader supporters.