Tech Giants Start to Tackle Racial Bias

IBM decided to cancel its facial recognition technology program in the aftermath of George Floyd’s brutal murder by Minneapolis police on 25 May 2020. The Black Lives Matter (BLM) protests that erupted throughout June 2020 highlighted police brutality and violence against people of colour in the US and around the world. As the protests escalated, they led to demands for the demilitarisation and defunding of police departments across the US, including with regard to the sophisticated surveillance technology deployed against American citizens.

On 8 June 2020, the CEO of IBM, Arvind Krishna, finally acknowledged in a letter to US senators that facial recognition technology enables discrimination against Black people and announced that IBM would halt further development of the technology. Along with Amazon and Microsoft, IBM has provided resources for both federal and local police departments to track potential criminals and surveil citizens. All three companies have now vowed to stop, or at least suspend, selling facial recognition technology to police departments.

Amazon had sold its software, Rekognition, to US police departments, but on 10 June 2020 the company announced a one-year moratorium on police use of the software. On 11 June 2020, Microsoft’s president, Brad Smith, stated that the company does not sell facial recognition software to US police departments and will not do so until national legislation is passed to regulate such software in line with human rights standards. Although tech giants in the US have made such promises, there are still “smaller” companies, like Facewatch, that provide similar facial recognition services in the UK and have made no statements about further regulation requirements in the aftermath of George Floyd’s death.

HOW FACIAL RECOGNITION DISCRIMINATES

Facial recognition technology is deployed to identify suspects, and a Black suspect is 2.5 times more likely to be matched with a mugshot on record. In some investigations, driver’s license photos on record are also fed into facial recognition searches. The technology is, moreover, less accurate when analysing Black faces, because it was originally trained on white faces. Furthermore, when a blurry surveillance photo produces no results, some police units have substituted a photo of a lookalike celebrity in order to get a match.
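To make the mechanics concrete, below is a minimal sketch in Python of how such a one-to-many (“1:N”) search works; the embedding model, gallery, names, and threshold are all hypothetical, not any vendor’s actual system. The key point is that a fixed similarity cutoff converts lower model accuracy for a demographic group into more false matches for that group.

```python
import numpy as np

# A minimal sketch of a one-to-many ("1:N") face search. All names,
# vectors, and thresholds here are illustrative assumptions.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe, gallery, threshold=0.6):
    """Return every gallery identity scoring above the threshold,
    strongest match first."""
    hits = [(name, cosine_similarity(probe, emb))
            for name, emb in gallery.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)

# Toy gallery of random "mugshot" embeddings.
rng = np.random.default_rng(0)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)

# With random vectors almost nothing clears the cutoff; but if a model is
# less accurate for one group, that group's non-matching scores sit closer
# to the threshold, so the same cutoff yields more false "hits" for them.
print(search_gallery(probe, gallery))
```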

Over-reliance on the technology also creates obstacles for defence counsel in the courtroom. The technology’s decision-making process is not open to public scrutiny, which makes legally challenging its reliability difficult. As a result, a finding of guilt in criminal court cannot properly be called into question on judicial review or appeal, because the evidence rests on impenetrable algorithms. Ultimately, this lack of transparency is a threat to human rights, notably the right to a fair trial, protected in Europe under article 6 of the European Convention on Human Rights (ECHR), and the right to due process, protected in the US under the 5th Amendment.

TECHNOLOGY REFORM ON THE HORIZON

Academics have proposed reforms for years, as Silicon Valley began to see the effects of the racial profiling that its technologies exacerbate. Until now, the same tech companies now promising to stop selling the software were the ones lobbying the government to adopt such tools. Their lobbying ignored the clear warnings issued by MIT researchers in a 2019 report, as well as by a federal report that evaluated commercially available algorithms from 99 developers, although Amazon, Google, Apple, and Microsoft did not submit their facial recognition algorithms for testing.

IBM’s Krishna claims that technology can be used by the police in an unbiased way, pointing to both body cameras and data analytics. Body cameras have repeatedly shown that police killings were unjustified, as in the case of Rayshard Brooks in Atlanta, and they are vital to transparency.

Data analytics, however, continues to contribute to racial profiling by the police. One paper argues that policing around the world has entered the “big data age”. In the United States, federal law enforcement relies on “government databases augmented on occasion by private data originally collected by for-profit data-broker companies”. Data collected through private contractors can combine locality, race, and other personal information to heighten the police’s suspicion of an individual. Arrests are thus influenced by profiled patterns of criminality and, as a result, perpetuate discriminatory policing practices that are masked by the use of supposedly “reliable” technology, especially in areas mostly populated by people of colour.
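To illustrate the feedback loop this creates, here is a deliberately simplified sketch in Python; the scoring function, its weights, and the data are entirely hypothetical, not any real system. It shows how a risk score built on arrest counts from over-policed areas sends more patrols there, generating more recorded arrests and pushing the score higher still.

```python
# Hypothetical illustration of a predictive-policing feedback loop.
# The weights and numbers are arbitrary, chosen only for demonstration.

def risk_score(arrests_in_area: int, broker_flags: int) -> float:
    """Toy score combining recorded arrests with data-broker flags."""
    return 0.7 * arrests_in_area + 0.3 * broker_flags

def simulate(rounds: int, initial_arrests: int, broker_flags: int) -> list:
    arrests = initial_arrests
    scores = []
    for _ in range(rounds):
        score = risk_score(arrests, broker_flags)
        scores.append(round(score, 1))
        # Higher score -> more patrols -> more recorded arrests,
        # regardless of the true underlying crime rate.
        arrests += int(score // 10)
    return scores

# An area with a higher starting arrest count (often a product of
# historical over-policing) diverges from an otherwise identical area.
print(simulate(rounds=5, initial_arrests=50, broker_flags=10))
print(simulate(rounds=5, initial_arrests=10, broker_flags=10))
```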

Thus, data analytics continues to be a threat to human and civil rights, just as facial recognition technology is. The opaque process used to arrive at a suspect not only violates the right to a fair trial, but also threatens the right to respect for private and family life (article 8 ECHR) and the right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures (4th Amendment of the US Constitution). IBM’s primary focus is on data analytics, and without reforms to this area of its business, its technology will continue to threaten the protection of fundamental rights.

LOCAL ACTION AGAINST FACIAL RECOGNITION DISCRIMINATION 

The decision by IBM to stop promoting facial recognition tech is one step towards ending discrimination against people of colour in the US, the UK, and globally. Local and federal governments are taking steps as well. Boston, for example, has banned the use of facial recognition technology by the police. Then, on 25 June 2020, four US senators, including Bernie Sanders and Elizabeth Warren, co-sponsored the Facial Recognition and Biometric Technology Moratorium Act of 2020 in the Senate, and the bill was introduced in the House by Representatives Ayanna Pressley, Rashida Tlaib, and Yvette Clarke. This kind of legal, political, and economic will is crucial to safeguarding human rights vis-à-vis technology.


Karen is a recent graduate in Politics, Philosophy and Law (LLB) from King's College London. Before this, she lived in six countries, and she speaks five languages (English, Portuguese, Spanish, French and Dutch). She is currently based in Amsterdam and hopes to become a barrister. Karen loves to run outside, sail in the summer and eat her way through every city she visits.