Countries differ in how far they protect and promote human rights as technology develops. One relevant measure is the degree of internet freedom a country allows, which Freedom House assesses in its annual “Freedom on the Net” report. Iceland, Estonia, and Canada rank as the top three countries for internet freedom, while China, Iran, and Syria have the least.
2020 was a year dominated by the COVID-19 pandemic, multiple lockdowns, and the normalising of remote working and schooling. This pushed the vast majority of us online and highlighted society’s increasing reliance on technology. But how does this affect our fundamental rights and freedoms, and what should be done to protect them? From social media usage to healthcare, we must shine a light on these issues. Never thought about it? Read on for a discussion with Henry Peck, Technology and Human Rights Researcher at the Business & Human Rights Resource Centre.
Q: WHAT IS THE ROLE OF THE BUSINESS AND HUMAN RIGHTS RESOURCE CENTRE?
A: It is a hub for information on business and human rights, tracking the human rights impacts of more than 10,000 companies worldwide. Much of this work is done in partnership with civil society, elevating the issues and needs of workers and communities.
The work includes:
investigating allegations of abuse directly with companies to advance corporate accountability; and
using evidence of company policies and practice to push for effective human rights due diligence.
I personally cover cases involving human rights allegations associated with technology, with a particular focus on expanding transparency and accountability in the surveillance technology industry. This includes identifying and analysing global developments related to technology and human rights, tracking the human rights performance of technology companies, and seeking responses to allegations of abuse.
Q: WHAT WERE SOME OF THE MAJOR HIGHLIGHTS OF 2020 FOR TECH AND THE PROMOTION OF HUMAN RIGHTS?
A: Against the huge challenges of 2020, there were notable advances with respect to technology and human rights, particularly concerning the impact of technology on discrimination. The #BlackLivesMatter protests that took place across the world helped spur vital examinations of racism and racially inspired human rights violations, including those committed through emerging technologies that reproduce discriminatory prejudices.
The use of algorithms and Big Data in machine-learning technologies such as artificial intelligence can deepen existing inequalities and cause other harms along racial lines. This is not new, but 2020 saw more research and evidence demonstrate these harms to a larger audience (a minimal sketch of how such a disparity can be measured follows the list below). Prime examples include:
A study finding that algorithmic decision-making in a Boston healthcare system prevented Black patients from receiving kidney transplants;
The wrongful arrest of a Black man in Michigan due to a false facial recognition match;
Important reports by the UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E. Tendayi Achiume, on the ways emerging digital technologies exacerbate discriminatory systems and on the abusive use of technologies to marginalise targeted populations; and
Action by more companies and municipalities: Amazon, IBM, and Microsoft suspended police use of their facial recognition technology following the protests against police brutality, while cities such as Portland, Oregon, followed San Francisco’s lead in banning its use, partly reflecting privacy concerns.
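Many of these findings rest on bias audits that compare a system’s error rates across demographic groups. As a purely illustrative sketch (the records, group labels, and numbers below are hypothetical, not drawn from any of the systems above), the following Python snippet shows how one common audit metric, the false positive rate per group, can be computed:

```python
# Illustrative bias audit: compare false positive rates across groups.
# All records below are hypothetical; a real audit would use a deployed
# system's actual decisions and ground-truth outcomes.
from collections import defaultdict

# Each record: (group, flagged_by_system, actually_positive).
# A "flag" could be, for example, a facial recognition match alert.
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, False), ("group_b", False, False),
]

false_positives = defaultdict(int)  # flagged despite being negative
negatives = defaultdict(int)        # all genuinely negative cases

for group, flagged, actual in records:
    if not actual:
        negatives[group] += 1
        if flagged:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
# Prints 33% for group_a and 75% for group_b: a gap of this kind,
# at scale, is what the 2020 research and reporting documented.
```

A disparity like this does not by itself prove discriminatory intent, but it is the kind of measurable evidence that moved these debates forward in 2020.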
Of course, these gains occurred in a landscape where research into AI is dominated by corporate funding and constraints, which do not automatically align with ethical concerns. Timnit Gebru, who works at the forefront of AI ethics research, was recently forced out of her job at Google over an apparent conflict between her research findings and the company’s business aims. She was also one of the few Black women on the research team, in a field with little diversity. But the swift and widespread condemnation of the dismissal, both among Google employees and externally, reflects growing concern over the development of AI and representation in tech.
Q: WHAT HAVE BEEN THE SETBACKS IN 2020?
A: The last year brought worrisome developments in surveillance technologies.
The right to privacy was undermined by the rush to present technological solutions to combat the spread of COVID-19. Some governments introduced location tracking apps and other measures to monitor individuals, collecting user data without sufficient transparency or oversight as to its usage or storage, and with unproven efficacy from a public health standpoint. Ordinary safeguards and procedures were overridden in the context of the pandemic. Such intrusive measures are difficult to remove once imposed, and they damage public trust in government, privacy rights, and the health of societies.
The restriction of the right to freedom of expression was seen in concerning reports about the sale and use of spyware to target journalists and activists. Citizen Lab documented the use of surveillance technology allegedly built by NSO Group against journalists at Al Jazeera. Research by Forbidden Stories revealed a booming market for these tools in Mexico, where they are often used against journalists. Meanwhile, the tools themselves are becoming harder to detect, with the use of invisible “zero-click” exploits. But there were also moves to better regulate this industry: the US State Department issued human rights guidance relating to the export of surveillance technologies, and the EU is close to expanding the list of dual-use surveillance technologies banned from sale to autocratic regimes. WhatsApp’s lawsuit against NSO Group continues, and Microsoft and other tech companies recently filed a joint amicus brief in support of the case, as did Access Now and various human rights groups.
Q: HOW HAS COVID-19 IMPACTED TECHNOLOGY AND HUMAN RIGHTS?
A: I have touched on the dangers of overreach in the tracking apps and monitoring introduced at the start of the pandemic, which increased already extensive surveillance of populations in some countries.
But a whole range of secondary effects emerged with the shift by millions of people to remote work and education. The use of employee monitoring software shot up, allowing employers to track the real-time activity of workers, including their browsing activity, the content they type, and even their webcams. Moving to remote work may require new ways of supervising, but such software presents challenges to the right to privacy and private life. In education, the use of proctoring apps raised other concerns as exams moved online, putting students’ every movement under a virtual microscope and flagging harmless actions, such as looking away to think, as suspicious. Already disadvantaged students have been particularly affected by the perils of this system: students of colour, for example, have encountered authentication delays or rejection because of the technology’s flawed facial recognition.
The pandemic has also significantly impacted gig economy workers. Delivery drivers and others finding work through digital platforms were both indispensable to the new demands of this period and particularly vulnerable to infection and loss of income. Uber drivers in the US said the company's inconsistent sick pay policy pushed them to continue working even when ill. In Australia, a spate of delivery driver deaths prompted demands for the sector to pay minimum wages and improve safety standards. But in California, platform companies poured funding into a campaign to pass Proposition 22, which allows them to continue classifying drivers as contractors without employee protections, potentially leaving drivers with pay below the minimum wage and insufficient health care coverage.
Q: WHAT MORE CAN BUSINESS AND GOVERNMENT DO TO PROTECT HUMAN RIGHTS AS TECHNOLOGY ADVANCES?
A: Both can include civil society in building human rights protections into business practices. There are excellent resources available, such as the Danish Institute for Human Rights’ new guidance on how to conduct human rights impact assessments of digital activities, and the UN OHCHR’s B-Tech Project’s guidance on implementing the UN Guiding Principles on Business and Human Rights in the technology space. Both emphasise boosting transparency and access to remedy.
On a larger scale, businesses and governments can do a huge amount to protect human rights alongside technological advancement by adopting comprehensive data protection measures and stringent controls to lessen the harms of digital spyware.
Q: WHAT ADVICE WOULD YOU GIVE TO PEOPLE TO PROTECT THEIR RIGHTS ONLINE?
A: A good start is to go through Tactical Tech’s Data Detox Kit, freely accessible on their website. It contains simple steps to limit your data trail and improve your digital habits. The Electronic Frontier Foundation has a heap of other great resources for protecting rights online, including their Surveillance Self-Defense guide.
Q: WHERE DO YOU PREDICT TECH DEVELOPMENT WILL GO IN THE NEXT FEW YEARS?
A: Keep an eye on the issue of mobile location data. The past year saw a number of stories about seemingly innocuous apps selling location and other personal data to vendors, who then sold that data to US military contractors and the government, where it has reportedly been used for immigration enforcement and other practices. US senators have asked the Department of Homeland Security for more information on how the data is used, and the ACLU has launched a lawsuit against federal authorities over the practice.
There is likely to be action around the relationship of social media companies to the content posted on their platforms. A US law known as Section 230 of the Communications Decency Act currently protects platforms and websites from most lawsuits related to user-generated content, but US President Joe Biden has pledged to reform this law. There are many arguments for preserving its essence to protect smaller platforms and websites from punishing lawsuits, while updating it to push internet companies to accept greater responsibility for the content on their sites and share ad revenue with the journalistic outlets that author material reproduced on these platforms.
Issues of ethics in AI will continue to grow: many executives are not aware of potential AI bias, and the technology’s usage will increase across a variety of sectors. AI also affects freedom of expression, through the algorithms that determine which content is elevated or taken down on social media. This profoundly shapes what material is seen and risks being misused for corporate or political purposes. It remains a vital area for attention.
FINAL THOUGHTS
There is no doubt that technology brings numerous benefits, such as greater access to information, access to more opportunities, and the ability to share ideas with others outside our immediate communities. However, it is important to ensure our rights are not curtailed, particularly where technology has been used to extend practices of surveillance and censorship. I hope this topic will be integrated into teaching curricula, as it will no doubt play an even bigger role in our lives in the years to come. On securing and promoting our human rights and freedoms, Henry explained that “there is a long way to go to ensure emerging technologies are ethical, contestable, and transparent, but critical engagement is increasing”.
Napassawan is currently a paralegal in London. She has an LLB from Swansea University and an LLM LPC from BPP University in Cambridge. She is interested in exposing human rights injustices and bringing them into the public domain, with a particular focus on children due to their vulnerable status.