How The Rules Of Customary International Humanitarian Law Fail To Govern Autonomous Artificial Intelligence In Warfare

The emergence of Lethal Autonomous Weapons Systems (LAWS) has been a cause of concern for many in recent years. The UN defines LAWS as weapons that “select, locate and engage targets without human supervision”; in essence, the weapons can operate unattended for a period of time. This has drawn wide criticism, as evidenced by UN Secretary-General António Guterres's description of the weapons as “morally repugnant,” and by the concerns of Sir Roger Carr, chairman of BAE Systems, about automating decisions to kill. Additionally, Google recently announced that it would not “design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. These concerns centre on the fact that while LAWS offer some military advantages, they also create accountability gaps, risk escalating conflict and pose broader threats to national security.

CONDITIONS UNDER INTERNATIONAL LAW

Two rules of customary international humanitarian law (IHL) regulate the types of weapons that may be deployed and how they may be used. A weapon will be prohibited under these rules if it can be shown to cause “unnecessary suffering” or to be “inherently indiscriminate”. However, both tests set a high threshold, meaning that LAWS are unlikely to be subject to an outright ban.

Under the first rule, “unnecessary suffering” does not exclude the possibility of extreme suffering, so long as that suffering serves a military purpose. For the test to be satisfied, unnecessary suffering would have to be inherent in the design of the weapon and present in every instance of its use, rather than arising by chance. There is no feature inherent to the design of LAWS which suggests that they would cause unnecessary suffering in all instances of their use. Indeed, because of improved accuracy, one may argue that in many cases they strengthen the correlation between the suffering caused and the intended military purpose.

In relation to the second rule, weapons are considered “indiscriminate” only when their effects cannot be contained, as with balloon bombs and biological weapons. Human Rights Watch warned in a 2012 report that LAWS risk becoming inherently indiscriminate. However, the U.S. State Department lawyer Charles Trumbull has more recently argued that “AI, combined with the development of sophisticated sensors, has made many weapons more discriminate”. This is questionable, given the likelihood of tragic errors when malfunctions occur at scale. At this stage, however, arguments for improved accuracy mean that an outright ban on LAWS remains unlikely. In light of this, a sharper focus is needed on whether existing legal frameworks are sufficiently watertight to regulate the use of LAWS, and thereby protect the rights of civilians.

COMPLEXITY IN REGULATING AUTONOMOUS WEAPONS

The first problem with regulating LAWS under existing legal frameworks lies in assessing proportionality. The principle of proportionality, codified in Article 51(5)(b) of Additional Protocol I, prohibits any “attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated”. Bombing an entire building in order to kill one sniper, for example, would be disproportionate, because the collateral damage would outweigh any military advantage.

Two problems arise with LAWS in this context. First, clarity is required about when the proportionality assessment is made: the “time of attack”. It is unclear whether this refers to the moment when the weapon is deployed, or the moment when the anticipated use of force occurs, which might be days or weeks later. This raises the second problem. If the time of attack is separate from the moment of deployment, how can commanders “reasonably determine” the collateral damage and military advantage that “may be expected” to result from an attack, given their limited insight into how the attack will eventually unfold? The fact that LAWS, by definition, carry out attacks without human supervision threatens to undermine key principles of IHL, which are founded on the assumption of a human assessment of proportionality. The principle of proportionality is essential in determining whether an attack was unjustified and in delivering justice to victims of war. The nature of LAWS' deployment muddies the waters, turning that assessment into a vague and nebulous exercise.

The second major problem with regulating LAWS is the accountability gap. Accountability facilitates the punishment of unlawful acts and aims to deter the commission of future ones; it gives victims the satisfaction that there has been retribution and promotes reconciliation. International humanitarian law and international human rights law mandate personal accountability for war crimes. Given the importance of accountability to the delivery of justice in times of warfare, how are we to attribute blame to a machine? These weapons have the potential to commit criminal acts for which no one would be responsible, and any attempt to assign responsibility to human operators or manufacturers is thwarted by their distance from the criminal act itself. Furthermore, even if the law were amended to encompass such machines, a judgment would not fulfil the purposes of punishment for a victim of a war crime: a machine can neither be deterred nor “punished” in any meaningful sense.

IMPLICATIONS FOR THE FUTURE

The autonomous functioning of LAWS raises a host of legal and moral issues. Given that an outright ban under IHL is unlikely, the loopholes of existing legal frameworks threaten to frustrate the delivery of justice in times of conflict. In particular, the assessment of proportionality is based upon an assumption of human decision-making. Similarly, the foundational principle of accountability requires a human capable of having moral responsibility. This is not a simple case of tweaking existing frameworks to adapt to new technologies. In removing human agency, LAWS threaten to confound the central principles of international humanitarian law and international human rights law. This has concerning implications for civilians, who could become victims of disproportionate attacks and war crimes that go unpunished.  

As this article has indicated, hopes of an outright prohibition of LAWS seem, at present, optimistic. Although LAWS present a range of threats to human rights in times of conflict, they have no characteristic that triggers an automatic ban under IHL. A more realistic solution may come in the form of protecting national self-interest. The prospect of cheap, selective weapons of mass destruction of this nature poses a threat to national security across the globe, which may lead to a treaty akin to the St Petersburg Declaration of 1868 or the Comprehensive Nuclear-Test-Ban Treaty, an idea proposed by Professor Stuart Russell in his December 2021 Reith Lectures. However, if no such agreement materialises, the widespread use of LAWS will demand a careful and nuanced application of the key principles of international humanitarian law in this new context.

Caspar is a law graduate, legal journalist and legal researcher based in London. He has worked as a Legal Associate in immigration and asylum law, where he assisted with the preparation of human rights appeals and judicial reviews, as well as drafting advice on a range of areas across immigration, asylum and nationality law. He now works on a major public inquiry and volunteers in his spare time for RCJ Advice, who provide free legal advice to those unable to afford it. He graduated with a BA in English from the University of Cambridge before studying Law and aims to pursue a career at the bar specialising in public, immigration and human rights law.
