Unsurprisingly, militaries in developed nations are rushing to adapt new technologies for lethal purposes. But what are the ethical implications of this? Are there some technologies we ought not to use at all? Or are there conditions under which alone their use is morally permissible? Four areas marked by controversy or by a lack of ethical understanding are: lethal autonomous weapons systems; cybersecurity; mass surveillance; and espionage.

It has been argued that lethal autonomous weapons systems ('killer robots', of which the nearest contemporary form is the armed drone) ought never to be deployed. The reason is that it is in principle impossible to assign responsibility for the killings they carry out. It cannot be the dumb robot itself. Nor can it be the military commander who deployed it. Nor can it be those who designed it to be autonomous (see Sparrow 2007). This is unproblematic if those killed have forfeited the right to life; it is deeply problematic if they have not.

So who is to blame when killer robots run amok? In this project I show how the moral problem can be resolved: a regulatory structure can create moral responsibilities on the basis of positive law. The trilemma is thus a false one, because there is a fourth option. The 'responsibility gap' can be crossed.

See Tom Simpson's opinion piece in The Conversation, "Killer robot drones". He has also written an opinion piece for The Daily Telegraph, "Will killer robots be the Kalashnikovs of tomorrow?".

Read the policy memo, "Killer robots: Regulate, don't ban".