Algorithmic moral control of war robots: philosophical questions
Abstract

In a series of publications, Ronald Arkin and his team have proposed a deontologically programmed 'ethical governor,' intended to effectively control and enforce the ethical use of lethal force by robots on the battlefield. This paper analyses the concept of an ethical governor within a more general criticism of algorithmic implementations of robot morality. It is argued that the metaphor of the ethical governor is dangerously misleading in several respects. First, the governor, as proposed by Arkin, overlooks a fundamental conflict of interest on the part of the robot's designer/operator that is absent from the original mechanical governor, and that can be shown to make effective robot control impossible in the proposed implementation. Second, the concept suggests that ethical control is a matter of correcting behavioural deviations from a 'reference ethical action' through a negative feedback loop, although it can be shown that this does not yield an adequate description of moral behaviour and, in particular, overlooks the central role of conscience and dissent in morality. Finally, the concept as proposed rests on a fundamental confusion among the properties of laws, rules of just war, terms of engagement, and moral rules. At the same time, experimental implementations of 'moral' robot controllers threaten to produce an ad hoc regulation of ethical issues on the battlefield that is removed from public scrutiny and democratic control. In light of these issues, the concept of an ethical governor and, more generally, existing attempts to handle robot morality by algorithmic means can be shown to be both misleading and dangerous, and to fail to address the moral problems they are supposed to solve. Consequently, such attempts in their present form must be questioned and re-examined, and a more critical approach to artefact morality should be adopted.