Prof John Finney argues that we must act to prevent the ‘morally repugnant’ prospect of machines with the power and discretion to take human life.
Article from Responsible Science journal, no.1: online publication: 11 June 2019
Robots have been with us for a long time. The first traffic light system was set up in Parliament Square 150 years ago, in 1868, by railway signals engineer J. P. Knight, who used moving semaphore arms, with red and green lights for night-time operation. Its life was, however, limited: following a gas leak, there was an explosion and a policeman was injured. The first automatic traffic lights, operating with fixed time intervals, were installed in Wolverhampton in 1926, while the first vehicle-actuated signals were installed at the corner of Cornhill and Gracechurch Street in the City of London.
As technology has advanced, robotic systems are being used in an increasingly wide range of applications throughout society. This wider application raises significant ethical issues. Industrial robots have been used for many years, and service robots in the home – for example robotic lawn mowers and vacuum cleaners – are increasingly being used to free us from activities often seen as domestic chores. Robotic systems are also increasingly used in healthcare, childcare and care of the elderly. As computing power continues to increase, and so-called artificial intelligence (AI) techniques are implemented, self-driving vehicles become a possibility, both for civilian use and for military application in unmanned aerial, surface and submarine vehicles (‘drones’). The prospect of fully autonomous weapon systems looms in the not-too-distant future.
In general, using a robotic system puts an intermediary device between the ‘user’ and the outcome of the robot’s action. This raises questions such as:
- How does the intermediary affect our legal and ethical responsibilities?
- How might this change with the complexity of the intermediary technology (which may ultimately lead to full autonomy of the intermediary)?
- If our responsibilities are reduced in some way by the presence of the intermediary, who or what takes them on?
- How might these changes influence other externalities?
Focussing on military robotic systems, we have a number of legal instruments that should be considered. International Humanitarian Law (IHL) applies to actions during armed conflict, while Human Rights Law applies otherwise. The UN Universal Declaration of Human Rights and the EU Charter of Fundamental Rights are also potentially relevant. All these instruments were devised many decades ago, when technology was much less developed. For example, the 1949 Geneva Conventions were not written with computers in mind – the big invention of the year was the 45 r.p.m. gramophone record! In 1977, when the Additional Protocols were agreed, the PC was in the early stages of development (some of us will remember the Apple II and the Commodore PET), and the world’s information and communication technology capabilities were many orders of magnitude less than they are today.
Central to IHL are the principles of distinction (for example, between a combatant and a civilian), proportionality (the action should be proportional to the perceived threat) and accountability (responsibility for the action taken). So we need to consider how these principles fare when the actions are controlled remotely, and when the actions are undertaken autonomously by the weapon itself. In principle, we need to:
- Scrutinise the mapping between the applications of new technologies and current laws and customs of war
- Try to understand how these laws and customs can or cannot be followed in the light of current – and likely future – technological developments
- Suggest a way forward for developing a set of ethical principles relating to the development and use of modern robotics in warfare.
A recent report by the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) [1] considered these questions, as well as the ethical implications of robotics in the non-military situations mentioned above. COMEST itself is an advisory body and forum of reflection set up by UNESCO, mandated to “formulate ethical principles that could provide decision makers with criteria that extend beyond purely economic considerations”.
In its deliberations, COMEST made a distinction between deterministic and cognitive robots. The actions of the former are controlled by a set of algorithms whose outcomes can be predicted. In contrast, cognitive robots, which can learn from experience, from human teachers and potentially on their own, can develop an ability to deal with their environment on the basis of what has been learned. Compared to ‘traditional’ deterministic robots, cognitive robots can make decisions that cannot be predicted by a programmer.
This distinction is important. The behaviour of the deterministic robot is determined by the program that controls its actions. Responsibility for its actions is therefore clear, and regulation can largely be dealt with by legal means. In contrast, a cognitive robot’s decisions and actions can only be estimated statistically, and are therefore unpredictable. Its behaviour in environments outside those it experienced during learning is in essence ‘random’ and can be potentially catastrophic. So assigning responsibility for the actions of what is partly a stochastic machine (one subject to random actions) is problematic.
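To make the contrast concrete, the minimal sketch below (illustrative only, not drawn from the COMEST report) compares a deterministic controller, whose every action can be predicted from its rules, with a stand-in for a learned policy, whose actions are sampled from a probability distribution estimated from experience. The function names, thresholds and probabilities are hypothetical assumptions for illustration.

```python
import random

def deterministic_controller(obstacle_distance_m: float) -> str:
    """Fixed rule: each input maps to exactly one action, so behaviour
    can be predicted and audited line by line."""
    return "brake" if obstacle_distance_m < 5.0 else "proceed"

def learned_stochastic_policy(obstacle_distance_m: float) -> str:
    """Stand-in for a trained model: the action is drawn from a probability
    distribution, so the same input can yield different actions, and inputs
    far from the training data may produce effectively arbitrary choices."""
    p_brake = min(1.0, max(0.0, 1.0 - obstacle_distance_m / 10.0))
    return "brake" if random.random() < p_brake else "proceed"

if __name__ == "__main__":
    for distance in (2.0, 7.0):
        print(distance,
              deterministic_controller(distance),
              [learned_stochastic_policy(distance) for _ in range(5)])
```

The point of the sketch is only that responsibility is straightforward to trace in the first case, and much harder in the second – particularly for inputs unlike anything encountered during learning.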
COMEST’s recommendations used a framework of ethical values and principles based on the common thread of Human Responsibility. It included the concepts of human dignity, interdependency (human, animal, environment), privacy, do no harm, responsibility (liability, transparency, accountability), beneficence (proportionality, cultural diversity) and justice (equality, non-discrimination).
With respect to remotely piloted armed robotic systems, the report notes that these have given society the ability to wage war remotely, and so threaten to change fundamentally the nature of armed conflict. They raise legal and ethical issues that States have so far failed to address. For example, an attacker can kill an adversary without threat to him or herself, targeted killing removes the right to justice, and remote killing contravenes the principle of human dignity. In summary, the report concludes:
- In addition to legal issues, there is a strong moral principle against an armed robot killing a human being;
- States should reconsider using armed drones in conflict situations, as they have done for other weapons such as anti-personnel mines and blinding laser weapons;
- Unless action is taken soon, the future prospect is of continuous remote conflict and justice-denying targeted killing.
On autonomous weapons, COMEST concluded that, legally, their deployment would violate International Humanitarian Law, and, ethically, that they break the guiding principle that machines should not make life-or-death decisions about humans. They lack the technical capability to ensure compliance with the principles of distinction and proportionality. Moreover, the authority to use lethal force cannot legitimately be delegated to a machine – killing must remain the responsibility of an accountable human. The overall recommendation was that, for legal, ethical and military-operational reasons, human control over weapon systems and the use of force must be retained.
In conclusion, although the prospect of robotic warfare is chilling, the danger is recognised in some of the highest quarters. In his September 2018 speech, the UN Secretary-General commented that “The impacts of new technologies on warfare are a direct threat to our common responsibility to guarantee peace and security”.
As he also said: “Let’s call it as it is. The prospect of machines with the discretion and power to take human life is morally repugnant.” Scientists are not alone in having a responsibility to try to prevent these possibilities becoming reality.
Prof John Finney, Department of Physics & Astronomy and London Centre for Nanotechnology, University College London, and British Pugwash.
Reference
1. COMEST (2017). Robotics Ethics. https://unesdoc.unesco.org/ark:/48223/pf0000253952