Are Killer Robots Better Soldiers? The Legality And Ethics Of The Use Of AI At War
Killer robots are not just science fiction anymore. Even if their use remains rare, machines able to identify targets and kill have been around for more than a decade. Many countries, including China, the US, the UK, Israel, Russia, South Korea, Australia, and Turkey, are already investing, or planning to invest, heavily in these technologies – bringing about a radical shift in the future of warfare.
While using algorithms and AI to kill certainly has strategic advantages – it spares soldiers and saves resources, for example – it also raises important legal and ethical concerns. For instance, removing human decision-making from the battlefield challenges principles of accountability and the mutuality of risk. Killer robots could also lead to conflict escalation, they could be turned into weapons of mass destruction, or worse – become unpredictable.
WHAT ARE KILLER ROBOTS?
The technical term for killer robots is Lethal Autonomous Weapons Systems (LAWS). To this day there is no agreed definition of LAWS, but most bodies describe them as weapons that locate, select, and engage human targets without human supervision – with “engaging” meaning killing. They are machines that can perform a military task and make decisions on their own, using AI. In contrast, nuclear or chemical weapons are not autonomous: they have no intelligence and cannot be programmed.
Different organisations have tried to further sub-categorise LAWS by the level of human involvement in their operation. Human Rights Watch (HRW) distinguishes between “man-in-the-loop” weapons, which deliver force only on a human command; “man-on-the-loop” weapons, which act under human oversight that allows intervention; and “man-out-of-the-loop” weapons, which operate without any human intervention.
Similarly, in the context of Unmanned Aerial Vehicles (UAVs), NATO has classified “remotely controlled systems,” which are fully obedient to human command; “automated systems,” which simply react according to a programme (used in defence, for example); “autonomous non-learning systems,” which have a fixed set of functionalities, work towards a specified goal (as opposed to a specified action) with some autonomy over how to perform it, but cannot learn; and “autonomous learning systems,” which act according to sets of rules that adapt and improve over time.
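For readers who like to see the two taxonomies side by side, they can be laid out as a small piece of code. This is a purely illustrative Python sketch: the category labels follow the HRW and NATO terms quoted above, while the class names and one-line descriptions are informal paraphrases, not part of any official standard.

```python
from enum import Enum

class HRWCategory(Enum):
    """Human Rights Watch's categories, by degree of human involvement."""
    MAN_IN_THE_LOOP = "delivers force only on a human command"
    MAN_ON_THE_LOOP = "acts on its own, but a human supervisor can intervene"
    MAN_OUT_OF_THE_LOOP = "selects and engages targets without human intervention"

class NATOCategory(Enum):
    """NATO's UAV categories, by how much the system decides for itself."""
    REMOTELY_CONTROLLED = "fully obedient to human command"
    AUTOMATED = "reacts strictly according to a pre-set programme"
    AUTONOMOUS_NON_LEARNING = "pursues a fixed goal with some freedom over the means, but cannot learn"
    AUTONOMOUS_LEARNING = "acts on rules that adapt and improve over time"

# Example: a remotely piloted strike drone sits at the most supervised end of both scales.
print(HRWCategory.MAN_IN_THE_LOOP.value, "/", NATOCategory.REMOTELY_CONTROLLED.value)
```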
ARE KILLER ROBOTS IN USE TODAY?
Killing machines of the “remotely controlled” or “man-in-the-loop” categories are undoubtedly in use today. Recent drone strikes in Iraq and Afghanistan – piloted from a distance by the US army – are an example.
More autonomous technologies such as the Israeli RoBattle combat system and the Sentry Tech border machine gun are currently used in defence. They gather military intelligence, carry out surveillance, and choose targets, but remain human-supervised in their firing function.
Another defence robot, the SGR-A1 developed by Samsung, entered into operation in the Korean demilitarised zone in 2014. It has surveillance, tracking, firing, and voice-recognition functions. The machine currently operates with human supervision but has the capacity to act on its own.
Similarly, the Harop “suicide drone” developed by Israel Aerospace Industries (IAI) can function with a man in the loop or completely autonomously. Its use by Azerbaijani forces against Armenia has been reported from 2016 onwards, suggesting it was at times activated without human supervision.
More recently, in March 2021, the UN reported that Libyan governmental forces had used the STM Kargu-2 against militia fighters. The 7kg quadcopter, equipped with a facial recognition system, reportedly fired in the absence of human supervision.
ARE KILLER ROBOTS LEGAL?
The answer to this question is unclear.
Following a long tradition dating back to ancient Greece, it is legal to kill in times of war if certain rules of honour are respected. Some of these principles are still binding on military action today under International Humanitarian Law (IHL).
Notably, the Convention on Certain Conventional Weapons (CCW), also known as the Inhumane Weapons Convention, prohibits the use of weapons causing unnecessary suffering and superfluous injuries (such as blinding laser weapons and chemical weapons) or weapons that have an indiscriminate effect (such as cluster munitions and anti-personnel mines, which do not have a particular military objective and can injure civilians).
By their very nature, LAWS are unlikely to cause unnecessary or superfluous injuries – if anything, they are said to be quick and precise. However, they could be considered indiscriminate, as they may be unable to comply with the customary rules of taking precautions in attack, distinguishing between civilians and combatants, and assessing the proportionality of the attack.
We might indeed wonder whether a programme able to learn can have the necessary comprehension of a situation to evaluate whether something or someone is targetable – especially in modern wars, where combatants often look like civilians. Similarly, would a programme be able to adapt proportionately to a sudden change of situation signalled only by behaviour? A failure to recognise someone hors de combat or surrendering, for example, would amount to unlawful killing.
Advocates of LAWS and AI would probably argue that emotionless machines are better at these complicated tasks because they eliminate human error. However, this is yet to be proven. On the contrary, mistakes by drones keep being reported, while academic research shows that using superior technology on the battlefield guarantees neither victory nor adequate civilian protection.
Besides, removing humans from the equation poses a problem of accountability. By the very nature of their tasks, LAWS are likely to commit fatal mistakes, but, unlike humans, they are ill-suited to being held accountable for them. Whether under IHL, human rights law, criminal law, or civil law, it will be difficult to determine who is responsible for a robot’s actions and to hold them accountable, especially where robots are able to make their own decisions.
Not only does this nullify the deterrent effect of the law (it is as ridiculous as trying to explain to your dishwasher that you will slam its door if it misbehaves), but it also nullifies the moral function of accountability: providing justice for the victims or their families.
A FAILED ATTEMPT TO REGULATE LAWS
In December 2021, the Sixth Review Conference of the CCW was held in Geneva to establish new rules on the development and use of LAWS. For the first time in eight years of discussion, a majority of the states party to the Convention were in favour of restriction (80 out of 125, with 30 advocating a specific treaty banning LAWS). They were opposed by a minority of states – controversially, among the biggest manufacturers of LAWS – including Russia, the US, Israel, and India.
Despite this majority, the conference failed to provide a response to the need for regulation. It concluded with an ambiguous mandate for states to “consider proposals and elaborate, by consensus, possible measures, including taking into account the example of existing protocols within the convention, and other options related to the normative and operational framework on emerging technologies in the area of lethal autonomous weapons systems”.
THE MORAL QUESTION
The ethics of AI is the subject of a long-standing dispute. The fear of machines taking over the human race is not new, nor particular to the context of war: Elon Musk claims that AI could be “far more dangerous than nuclear weapons,” while Stephen Hawking warned that it could “spell the end of the human race”.
Without diving into the endless debate on whether this fear is justified, a few points can be made in relation to LAWS.
Firstly, mutuality of risk seems to be a presumed part of armed conflict. Studies conducted on drone pilots show a phenomenon of moral disengagement in war on the attacker’s side. One pilot testifies: “I thought killing somebody would be this life changing experience, then I did it, and was like ‘all right, whatever’ (…) Killing people is like squashing an ant. I mean, you kill somebody and it’s like ‘all right, let’s go get some pizza’”. Another expresses: “This was a weird feeling…you feel bad. You don’t feel worthy. I’m sitting here safe and sound, and those guys are in the thick of it, and I can have more impact than they can. It is almost like I don’t deserve to be safe”. Not that this makes any difference to the person killed, but something feels uncomfortable and undignified about such an imbalance of risk.
Humanist considerations aside, the absence of threat to one’s own troops also poses a risk of conflict escalation. Distance from the battlefield reduces aversion to initiating new conflicts: it facilitates the decision to go to war and undermines negotiation and diplomacy. Besides, a robot could accidentally start a new conflict, or aggravate an ongoing one, through an unnecessary use of force.
Furthermore, a machines-against-machines war would make battlefields uninhabitable for humans. All the burdens of war would fall onto civilians, as the only people present in the area of conflict. This does not necessarily mean there would be more civilian casualties, but it does mean that civilians’ lives would be more at risk than soldiers’ lives. Again, a disturbing idea.
Finally, there is a serious risk of proliferation. LAWS are cheap to build, easy to transport, and hard to detect – meaning they are extremely scalable. And unlike conventional weapons, they do not require an entire army: a single person can activate a swarm of killer robots. This means they could potentially carry out targeted mass attacks, even ethnic cleansing, in no time. For that precise reason, some qualify LAWS as weapons of mass destruction.
The ICRC (which is mandated to promote the correct application of IHL), the UN, the European Parliament, 30 countries, hundreds of civil society organisations, about 4,700 AI experts, and the majority of the public are all in favour of restricting the use of LAWS.
On the other hand, a handful of states investing in these technologies unsurprisingly oppose regulation, arguing instead in favour of technological advancement and extolling speed and precision. However, in the military context, precision is not only about the accuracy of the strike; it is about the ability to evaluate and strategise in order to minimise civilian casualties – and it comes with an equivalent responsibility.
As Christof Heyns puts it, there are two separate questions to be asked about killer robots: “Can they do it?” and “Should they do it?”. With rapidly advancing technology, future generations of robots might well surpass human soldiers, but that does not mean we should allow them to.
Coralie is a former art director and a recent LLM graduate from University College London, dedicated to fighting for the respect and protection of human rights.