Despite its omnipresence, the general public may not yet have fully considered the implications of artificial intelligence (AI). AI is so thoroughly integrated into our daily lives that it is nearly impossible to distinguish between decisions made by humans and decisions made by AI, even in the most menial tasks. Being cost-effective, AI may even displace the employment opportunities of human beings. Yet despite these far-reaching implications, the regulatory frameworks that currently govern AI are lax. International Human Rights Law (IHRL) remains traditional in its focus, despite a pressing need for laws to adapt to changing technologies. This article investigates what reforms might help to build a human rights-compliant regulatory framework for AI.
THE EXISTING LEGAL FRAMEWORK
Human rights have had only a peripheral presence in discussions of AI’s effects on global society. In March 2018, the United Nations Independent International Fact-Finding Mission on Myanmar found that the social media giant Facebook and its AI-driven news feed played a determining role in circulating hate speech and inciting violence amidst a crisis of possibly genocidal proportions. It is clearly detrimental that the overarching international law framework falls short in dealing with AI and its wide-ranging effects on all aspects of human life.
This problem was also addressed in a webinar titled “Artificial Intelligence and Human Rights,” held on 27 May 2020 as part of a series of webinars on law and technology. In that talk, the Head of the Information Society Department of the Council of Europe described the challenges AI poses to universal human rights. These rights include, inter alia:
(a) the right to privacy and family life, which has special relevance in times of COVID-19, owing to governmental health-monitoring programmes;
(b) the right to liberty and security and right to fair trial, which could be curtailed by ‘predictive policing’ and ‘predictive justice’ mechanisms;
(c) the right to freedom of expression, which is severely hampered by surveillance capitalism; and
(d) the right to participate in democratic elections free from undue influences, which AI can undermine, as evinced by the infamous Cambridge Analytica episode.
While most sectors are regulated through legal, rights-based frameworks, the discourse surrounding AI is strangely centred on ethics. Yet corporations, rather than states, dominate the AI field, and to trust profit-motivated corporations with the application of ethics not encoded in law is naïve. The algorithms corporations build enable decision-making and significantly affect both active and passive users of technology; algorithmic accountability is therefore essential.
Microsoft, in January 2018, released “The Future Computed,” which sets out the tech giant’s promise to be steered by ethical principles such as “fairness, reliability and safety, transparency and accountability, privacy and security, [and] inclusiveness” in its AI-related work. Noble words perhaps, but ethical language is not clearly defined and carries no concrete legal duties. Within the ethics paradigm, for example, what do “reliability and safety” mean? Who determines how inclusive “inclusiveness” is? There is no consensus on the meaning of important terms used freely in the ethics model of AI management.
UN GUIDING PRINCIPLES ON BUSINESS AND HUMAN RIGHTS
It is telling that the ethics strategies adopted by private actors deliberately avoid relying on IHRL. Even the UN Guiding Principles on Business and Human Rights (“the Principles”), which act as a set of guidelines for states and corporations “to prevent, address and remedy human rights abuses committed in business operations,” find no mention in the ethical AI agenda.
The Principles, endorsed by the UN Human Rights Council in 2011, act as the global authoritative standard on business and human rights. They rest on three pillars: “protect, respect and remedy.” States must protect human rights, corporations must respect them, and all whose rights have been violated must have access to an effective remedy. Whilst the Principles can serve as a vital analytical template for those involved in AI-related work, instruments specifically targeting AI are imperative due to its constantly evolving and invasive nature.
THE TORONTO DECLARATION
The Toronto Declaration (“the Declaration”) is such an instrument. Released by Amnesty International, Access Now, and partner organisations in 2018, the Declaration protects the right to equality and non-discrimination in “machine-learning systems”. The aim of the Declaration is to apply existing IHRL standards to the development and use of machine-learning systems. Machine learning, a subset of AI, makes it possible to construct a mathematical model from data, including a large number of variables that are not known in advance. Simply put, a machine-learning system learns how to function from the data it receives. The Declaration also states that human rights, by virtue of being “universal, indivisible and interdependent and interrelated,” are binding, actionable laws. Further, it provides that:
The human rights law and standards referenced in this Declaration provide solid foundations for developing ethical frameworks for machine learning, including provisions for accountability and means for remedy.
The above provision clearly implies that AI (i.e. machine learning) can be adequately regulated by relying on the established foundations of IHRL. The Declaration also strives to promote non-discrimination, mitigate discriminatory outcomes, enhance transparency, and provide effective remedies to those harmed by AI. A laudable effort, the Declaration could be effective in curbing AI’s capricious tendencies if adhered to widely by state and non-state actors alike.
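For readers unfamiliar with the mechanics, the brief sketch below (purely illustrative, and not drawn from the Declaration or any instrument discussed here) shows in miniature what it means for a system to learn its behaviour from data: the decision rule is derived from example inputs rather than written by hand, which is precisely why data-driven outcomes raise distinct questions of accountability.

```python
# Illustrative sketch of a "machine-learning system" in miniature.
# No programmer writes the rule; the program derives one (a line of
# best fit) from example data, then uses it to make predictions.

def learn(examples):
    """Fit y = a*x + b to (x, y) pairs by ordinary least squares."""
    n = len(examples)
    mean_x = sum(x for x, _ in examples) / n
    mean_y = sum(y for _, y in examples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in examples)
    var = sum((x - mean_x) ** 2 for x, _ in examples)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training data: the system's behaviour will depend
# entirely on these inputs, biases and all.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]
a, b = learn(data)

print(f"learned rule: y = {a:.2f}x + {b:.2f}")
print(f"prediction for x = 5: {a * 5 + b:.2f}")
```

If the example data are skewed or discriminatory, the learned rule will be too; real machine-learning systems differ from this toy only in scale, which is why the Declaration targets the data-dependence of such systems.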
WHITE PAPER ON ARTIFICIAL INTELLIGENCE
The European Commission, in February 2020, issued its White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, the first significant policy document of its kind formulated by a global power. In it, the Commission attempts to create “a regulatory and investment-oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology,” through the introduction of universal human rights values into the AI space.
Apart from analysing the risks associated with AI (focusing on the legal requirements for high-risk uses of AI technology) and detailing transparency obligations, the White Paper also lists certain liability measures to be taken when dealing with AI. It is worth remembering that the General Data Protection Regulation (GDPR) began its journey as a white paper and now has far-reaching legal effects. Only time will tell how effectively the White Paper on AI can enable the creation of a responsible AI ecosystem.
In sum, there is an urgent need for algorithmic accountability, not only to keep private actors in check but also to prevent states from waging wars of terror on minorities by weaponising AI. A robust international law framework could be a powerful tool in countering AI’s dark side by fortifying its ethical and human rights-related outlook. We must pursue the codification of hard law, building on the existing infrastructure of soft-law efforts, to guide responsible AI. As a global, democratic endeavour, IHRL must be at the forefront of AI-related regulatory consultations. Unparalleled power must always be accompanied by unrivalled responsibility, especially in these unprecedented times.
Aparajitha has a Bachelor of Laws (LL.B.) degree from I.L.S. Law College, India (2015) and an LL.M. in Public International Law from Leiden University. She currently works as a Lecturer at Jindal Global Law School, where she handles the subject ‘Global South and International Law’, taught by Prof. (Dr.) B.S. Chimni. Her areas of interest include International Criminal Law, Public International Law and International Humanitarian Law.