Human Rights Pulse

Automating Insecurity: Decision Making In Recruitment

THE PROBLEM

In 2018, Reuters reported that Amazon had been using an artificial intelligence (AI) recruitment algorithm to review job applications, and that the algorithm discriminated against women by disproportionately favouring male applicants. While Amazon scrapped the system, the use of such automated decision making (ADM) in recruitment has grown: the following year, LinkedIn reported that 67% of hiring managers were using some form of ADM.

Discrimination which emerges from this use of ADM impacts human security, specifically economic security. According to a 1994 United Nations report, human security is shaped by two major components: “freedom from fear and freedom from want”. Economic security provides freedom from want when individuals have a secure, adequate income. This allows people to live a dignified life in which they can support themselves financially. Economic security also feeds into other aspects of security, such as food and health security, by ensuring that funds are available for food and healthcare. Discrimination in recruitment, by contrast, can prolong unemployment and lead to economic insecurity.

Given the relative novelty of ADM, we must ask: does international human rights law provide individuals with enough protection from this discrimination? To answer this question, we need to look at what ADM is and what protection current law provides.

AUTOMATED DECISION MAKING

The UK’s Information Commissioner’s Office defines ADM as the process of making a decision by automated means without human involvement, often using a complex AI algorithm. These decisions can be based on factual data, digitally created profiles, or inferred data. The problem is not the existence of ADM itself; the problem arises when the algorithms replicate human bias, causing discrimination.

Algorithms can cause discrimination when they are trained on biased data. For example, if a recruitment algorithm is trained on historically successful CVs from a company that mostly employs men, it will view CVs from men more favourably. This was the case for the algorithm Amazon used: it systematically penalised applicants who mentioned membership of a “women’s” chess team or attendance at women-only colleges, because such details rarely appeared in the CVs used for training. The algorithm made decisions based on experience or education, but those features reflected applicants’ gender, and so discrimination occurred through a proxy.
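To make the mechanism concrete, the sketch below trains a toy CV classifier on deliberately biased historical data. The CVs, outcomes, and model are invented for illustration and are not Amazon’s system; the point is simply that a model given no gender field at all can still learn to penalise the proxy word “women’s”.

```python
# A minimal, hypothetical sketch of proxy discrimination.
# Requires scikit-learn; all data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical CVs from a mostly-male workforce: the word
# "women's" happens to appear only in rejected applications.
cvs = [
    "software engineer chess club captain",
    "software engineer rugby team captain",
    "software engineer women's chess club captain",
    "software engineer graduate of a women's college",
]
hired = [1, 1, 0, 0]  # 1 = hired, 0 = rejected

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(cvs)           # bag-of-words features
model = LogisticRegression().fit(X, hired)  # learn from biased history

# No "gender" column exists anywhere, yet the learned weight for the
# proxy token is negative: mentioning "women's" lowers an applicant's
# score even though gender was never an input.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # negative coefficient -> penalised proxy
```

In a real system the proxies are usually subtler (word choices, named institutions, gaps in employment), but the failure mode is the same: the model reproduces whatever bias is encoded in its training data.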

Through this mechanism, ADM results in indirect discrimination. It is “indirect” because the system appears to apply equally to everybody, but in reality it disadvantages certain groups. In the Amazon example, the algorithm largely discriminated along gender lines, but ADM can also discriminate on the basis of other protected characteristics, potentially causing insecurity for many groups.

THE LAW

Since ADM in recruitment can result in economic insecurity, it is important to assess the human rights provisions available to protect individuals. At the international level, article 6 of the International Covenant on Economic, Social and Cultural Rights (ICESCR) provides a right to work, and its accompanying general comment specifies that discrimination in recruitment based on protected characteristics, such as race or gender, is forbidden. States are required to protect this right by enacting legislation or taking other measures. Similar provisions are found at the regional level in the Revised European Social Charter (article 1), the African Charter on Human and Peoples’ Rights (article 15), and the Additional Protocol to the American Convention on Human Rights (article 6). These articles could be employed, through the relevant mechanisms, to protect those who experience this kind of discrimination.

However, these articles are only useful where it can be proved that the algorithm has caused discrimination; unfortunately, this is not always possible. Most companies that use ADM for recruitment rely on software protected by trade secrets. This prevents prosecutors (or expert witnesses) from accessing the code, making it difficult to prove discrimination concretely. Even where access to the code is granted, the “black box” nature of many AI algorithms means that it is not always possible to understand how the computer reached its decision, again making discrimination hard to prove. As a result, anti-discrimination legislation does not provide adequate protection against this form of discrimination.

Instead, a direct reference to ADM in international human rights law would enable better protection. The European Union’s General Data Protection Regulation (GDPR) provides a potential framework for this. In article 22, the GDPR provides a right “not to be subject to a decision based solely on automated processing…which produces legal…or similarly significant effects”. This is significant because an individual does not have to prove that they have been discriminated against; the right is engaged whenever a decision is based solely on ADM. This bypasses the problem of proprietary or “black box” software.

While this provision initially seems adequate, it is limited in two ways. Firstly, it uses the term “solely”, meaning that algorithms that make an initial sift of applications but not the final hiring decision might not be covered. Secondly, article 22 §2(c) states that ADM is acceptable with the “data subject’s explicit consent”; individuals who are desperately seeking a job may give consent, not because they want to, but out of concern that refusing would harm their application.

Despite these flaws, GDPR article 22 offers better protection against this specific kind of discrimination than any other international provision. International organisations and committees developing human rights law could draw on the idea outlined in the GDPR, address its flaws (perhaps by removing “solely” and article 22 §2(c)), and thereby expand human rights protection. Preventing decisions made solely by ADM would ensure that economic security is not restricted by an algorithm and that access to the right to work is not unduly limited in this way. Such a provision could be introduced through an update to the general comment on ICESCR article 6 or included in a new multilateral human rights treaty focused on technology.

THE FUTURE?

Amazon’s use of ADM in recruitment is only the tip of the iceberg. Algorithms make decisions every day that impact human security, and we do not yet have adequate human rights protection against this. While anti-discrimination legislation may provide protection where discrimination can be proved, such cases are likely to be a minority. Instead, soft law modelled on the GDPR, delivered as an update to the general comment on article 6 or as a new international treaty, may be more adequate in ensuring protection. Technology is changing our world more rapidly than ever before; it is time to ensure that human rights law keeps pace.

Caragh currently interns at the UN Department of Peace Operations following a research position at the office of the Children and Young People’s Commissioner Scotland. She has also recently completed an LLM in Human Rights at Edinburgh University and in 2018 completed a BA in History and Theology at Durham University. Between her BA and LLM, she undertook management roles in both the charity and private sectors. She currently sits on the board of directors for a student-focused charity and contributes, as a founding member, to the development of a digital due process legal clinic in Scotland.
