In 1922, Carl Schmitt, a German political and legal commentator, published a book titled Political Theology. In it, he argued that the law should be suspended in a state of disaster or emergency. According to Schmitt, the law impedes effective action to mitigate or end the disaster, is predicated on normality and therefore cannot produce predictable results in a state of abnormality, and cannot respond to a crisis when the nature thereof is uncertain or evolving. In this setting, a sovereign – who is imbued with the power to declare this state of exception in addition to responding to it in accordance with self-set parameters – is the only effective actor.
The COVID-19 pandemic has ignited debates regarding the extent to which governments should declare this state of exception and the encroachment on fundamental freedoms in the name of responding to it. This tension between executive response to crises and the protection of human rights is clearly demonstrated in policies of mass surveillance and their implications for the right to privacy. While there can be no doubt of the widespread use of surveillance technology pre-COVID, the pandemic has exacerbated fears of these practices being heightened and institutionalised to a point of no return.
A SNAPSHOT OF SURVEILLANCE TECHNOLOGIES BEFORE AND DURING COVID-19
Surveillance technologies are complex, interoperable, and often discreet or seemingly unobtrusive. They can be broadly understood as any system (including hardware and/or software) designed to collect, retain, process, or share data associated with, or capable of being associated with, a person, with another person or group for a purpose that may or may not be disclosed. Far from being limited to a few authoritarian states, state-sanctioned surveillance technology has become a ubiquitous feature of governments across the political spectrum. There is no shortage of examples, but one need not look further than whistleblower Edward Snowden’s explosive revelations in 2013 to understand the extent of state surveillance (in liberal democracies) and its geopolitical implications. Despite various actors raising concerns at domestic and international levels, including the United Nations General Assembly and the Human Rights Council, surveillance technology shows no sign of slowing down. On the contrary, there has been a significant uptake in big data and artificial intelligence-driven surveillance technologies around the world.
Given this background, it is perhaps not surprising that COVID-19 spurred the deployment of surveillance technologies en masse in a bid to curb the spread of the virus. China capitalised on its advanced capabilities in this arena in order to identify individuals who may be COVID-19 positive and enforce a quarantine. These measures included facial recognition, communication monitoring, drone technology, and smart policing. Russia reportedly installed 100,000 surveillance cameras equipped with facial recognition technology to enforce self-isolation. In Tunisia, robot police were deployed to ensure citizens adhered to lockdown regulations. Mobile contact tracing applications have had an almost universal appeal, with dozens of states around the world adopting them as voluntary, and in some cases involuntary, tools to locate and track positive cases.
One should avoid hasty conclusions that these technologies are inherently bad or that they are always deployed as means to nefarious ends. Indeed, as an example, the aim of effectively responding to a pandemic can hardly be regarded as an illegitimate purpose. Whether this has actually been achieved with the use of these technologies has been questioned, particularly as regards contact tracing applications. Be that as it may, a common theme in the international COVID-19 response is that states are leveraging the efficiency and scalability of digital surveillance technologies to combat this crisis. Although these practices are not entirely new, the expansion and entrenchment thereof globally is precedent-setting and may have disastrous consequences – particularly at a time when many people are vulnerable and willing to accept great incursions on their fundamental rights.
THE RIGHT TO PRIVACY IMPERILLED
The use of surveillance technology has the potential to violate many human rights, such as the rights to freedom of expression, freedom of association, and equality. This article focuses on the right to privacy, primarily because other human rights are at risk by virtue of the right to privacy being eroded in this context.
Article 12 of the Universal Declaration of Human Rights states that “no one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.” Article 17 of the International Covenant on Civil and Political Rights echoes this wording, but further provides that such interference or attacks must not be unlawful. Although the concept has a wealth of jurisprudence behind it, privacy can be generally understood “as the presumption that individuals should have an area of autonomous development, interaction and liberty, a ‘private sphere’ with or without interaction with others, free from State intervention and from excessive unsolicited intervention by other uninvited individuals”. To date, a myriad of regional and national laws, policy guidelines, and codes have augmented the right to privacy as enshrined in these instruments and solidified the international consensus that privacy matters.
Even so, the right to privacy is not unqualified and may be lawfully limited. The international law principles of legality, necessity, and proportionality are yardsticks by which to measure any erosion of the right to privacy in order to assess if it is arbitrary or unlawful, and thus in violation of international human rights law. Moreover, regional and domestic data protection frameworks cater for exceptions to the general rule. For example, the European Union’s General Data Protection Regulation recognises that special personal data, such as a person’s biometric or health data, may be processed if any of the circumstances stipulated in article 9(2) prevail, such as where processing is necessary “for reasons of public interest in the area of public health”. Although this would not provide exemption from compliance with other processing limitations, there is general acknowledgement that the right to privacy cannot be upheld at the expense of other rights or legitimate interests.
However, much like the pandemic itself, widespread use of surveillance technologies by states may have both foreseen and unforeseen consequences that do not align with national, regional and/or international legal principles as regards the treatment of this fundamental right. First, not all states have robust privacy protection frameworks; in many places the adoption of these frameworks, including the appointment of an independent regulator, is still pending. For instance, only 32 out of 55 African states have enacted data privacy laws. Although regional and international instruments do exist, the lack of domestic measures may frustrate effective oversight and remedial action in the event of violations of the right to privacy by both state and non-state actors.
Second, given the nature of the beast, states are seemingly reluctant to share information on surveillance policies, and this creates information asymmetries. This was evident in the rollout of contact tracing applications in South Africa, Indonesia, Poland, Russia, and the United Arab Emirates, among others, which was accompanied by little to no information regarding backend use and privacy controls. While the paucity of information regarding the collection and use of data may be legitimate, necessary, and proportionate in certain circumstances – such as intelligence pertaining to national security threats – it is difficult to imagine how greater transparency regarding the collection, use, and retention of data could harm the COVID-19 response. If anything, it may increase trust in institutions and result in better civilian cooperation.
Third, the expansion of surveillance technologies during the pandemic has sparked fears that these measures will not be pared down in future. There is significant precedent for concern about the continued use or repurposing of surveillance methods in the world after COVID-19, including methods where participation may originally have been voluntary (such as applications). Rasha Abdul Rahim, the deputy director of Amnesty International’s technology division, reminds us that: “If history has taught us anything in the post-9/11 era, it is that once governments put in place surveillance measures it is very difficult to then roll them back.” The absence of sunset clauses – provisions which provide for the termination of a law or policy at a set date – would be concerning in this context. It should be noted that the necessity of an end point applies to the surveillance technology itself, as well as to the personally identifiable data gleaned from it. The latter should not be retained for longer than is necessary to achieve the legitimate purpose for which it was originally collected, unless further processing requirements are met (where they exist) under relevant laws.
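To make the retention point concrete, the following is a minimal sketch in Python of how a data controller might enforce both a retention window and a programme-wide sunset date. The dates, field names, and the purge_expired_records helper are hypothetical illustrations, not drawn from any particular law or contact tracing system.

```python
from datetime import datetime, timedelta, timezone

# Illustrative values only: the sunset date and retention window are assumed,
# not taken from any specific regulation or public health programme.
PROGRAMME_SUNSET = datetime(2021, 6, 30, tzinfo=timezone.utc)
RETENTION_PERIOD = timedelta(days=14)

def purge_expired_records(records, now=None):
    """Return only the records that may still lawfully be kept.

    Each record is assumed to be a dict with a timezone-aware
    'collected_at' datetime. Once the programme's sunset date passes,
    nothing is retained; before then, records older than the retention
    window are dropped.
    """
    now = now or datetime.now(timezone.utc)
    if now >= PROGRAMME_SUNSET:
        return []  # the measure has lapsed: no further retention
    cutoff = now - RETENTION_PERIOD
    return [r for r in records if r["collected_at"] >= cutoff]
```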
Fourth, even where national and international laws are complied with in the utilisation of surveillance technologies, the risks of cybersecurity breaches and abuse of data are evident. This was demonstrated in the debates on centralised versus decentralised models of contact tracing applications. Emanuele Ventrella, data protection advisor at Trilateral Research Limited, summarises the distinction as follows:
“Under the centralised approach, the identifiers of the infected user and those of its contacts are stored in a central database, enabling increased visibility of the data by governments and health services…Under the decentralised approach, identifiers are generated by the user’s phone and only the identifiers broadcasted by the infected user are shared with the backend server.”
Notably, the centralised approach has been criticised on the basis that, among other things, the centralised database will become a “treasure trove” for hackers and a tempting surveillance tool for governments beyond the pandemic. While both models use backend servers, in this context, the decentralised approach has been lauded for the unique key created by each user’s phone, which should prevent backend server operators or hackers from being able to associate the data with an identifiable individual.
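By way of illustration, the following is a minimal Python sketch of the decentralised approach described above. It is a toy model, not an implementation of any real protocol (such as DP-3T or the Apple/Google exposure notification framework): it omits Bluetooth broadcasting, key rotation, and the cryptographic derivation of identifiers, and the class and method names are hypothetical.

```python
import os
from typing import List, Set

class Device:
    """Toy model of a phone in the decentralised approach.

    Identifiers are random values generated on the device; the backend only
    ever sees identifiers voluntarily uploaded by a user who tests positive.
    """

    def __init__(self):
        self.own_identifiers: List[bytes] = []   # what this phone has broadcast
        self.observed: Set[bytes] = set()        # what it has heard nearby

    def broadcast_identifier(self) -> bytes:
        ident = os.urandom(16)                   # fresh random ephemeral ID
        self.own_identifiers.append(ident)
        return ident

    def record_contact(self, ident: bytes) -> None:
        self.observed.add(ident)                 # stored locally, never uploaded

    def report_positive(self, backend: "Backend") -> None:
        backend.publish(self.own_identifiers)    # only the infected user's IDs leave the phone

    def check_exposure(self, backend: "Backend") -> bool:
        # Matching happens on the device, so the backend never learns who was exposed.
        return any(i in self.observed for i in backend.published)

class Backend:
    def __init__(self):
        self.published: List[bytes] = []

    def publish(self, identifiers: List[bytes]) -> None:
        self.published.extend(identifiers)

# Usage: Alice and Bob exchange identifiers; Bob tests positive and uploads his IDs.
alice, bob, backend = Device(), Device(), Backend()
alice.record_contact(bob.broadcast_identifier())
bob.report_positive(backend)
assert alice.check_exposure(backend)  # Alice learns of her exposure locally
```

Because matching occurs on the device, the backend in this model holds only identifiers that infected users have chosen to upload, which is why the decentralised design is seen as harder to repurpose into a general surveillance tool.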
As is clear from the above, even where the use of surveillance technologies is desirable and permissible, there is a pressing need to guard against the risks associated therewith at all times.
CHECKS AND BALANCES
On 2 April 2020, Amnesty International, Human Rights Watch, and numerous other organisations published a joint statement that reiterated the importance of balancing human rights to protect individuals and societies during the pandemic. This statement called on governments to respond to the pandemic with enhanced digital surveillance only if eight conditions are met. In summary, these conditions are that surveillance measures must: (i) be lawful, necessary, and proportionate; (ii) be time-bound; (iii) be for the purpose of responding to COVID-19 only; (iv) be subject to robust security measures; (v) address the risk of these tools being used to discriminate against people; (vi) be transparent, particularly as regards private-public relationships; (vii) incorporate safeguards against abuse; and (viii) allow for stakeholder engagement. It is encouraging that similar statements, recommendations, and guidelines have been produced at national and international levels to affirm the need for a principled response to the use of surveillance technology during COVID-19.
The aforementioned recommendations are useful guiding principles for states, but consideration must be given to the roles of various stakeholders who can or should be involved in monitoring surveillance measures and holding governments to account. A multi-stakeholder approach is required. Individuals must be fully apprised of the collection, use, and retention of their data, and must be equipped with effective remedies to challenge misuse. Businesses must understand their obligations as regards the right to privacy, and in the context of mass surveillance, must guard against government requests for access to data that do not conform to domestic or international law. Individuals, civic organisations, regulatory and judicial bodies, and law enforcement must interrogate public-private partnerships that enable surveillance systems to ensure that the confluence of corporate and political interest is legitimate in all cases. And lawmakers and regulators must be up to the task of understanding rapidly evolving technologies and their implications to better cater for their regulation where appropriate – during COVID-19 and beyond.
In Carl Schmitt’s state of exception, legal fetters should be abandoned so that the state can efficiently double down on its crisis response. Many consider his writings extreme, in no small part due to his fervent support for Nazism. However, there is support for thinking along these lines, which sees the law (or parts of it) as inhibiting rather than enabling during a crisis. In these uncertain times, it is vital to remember that fundamental human rights are made of stronger stuff and find their application tested most often in times of social turbulence. While legitimate limitations are explicitly accepted, and rights often require careful balancing, no one is above the law, whether during a crisis or not. It is important that countries around the world do not use the COVID-19 state of exception to deepen the hold of mass surveillance in the long run. Robust privacy protection frameworks (including progressive recommendations made during this crisis) and multi-stakeholder involvement are essential safeguards. One can only hope that this critical moment in our history will inspire greater reflection on the mass surveillance architecture established around the world, those who enable it, and why.
This article is part of a series collaboration between Human Rights Pulse and robos of Tech Law and Policy (r-TLP) in December 2020 aiming to highlight issues pertaining to technology and human rights. r-TLP is an initiative based in India that promotes publishing articles/opinion pieces on the intersection of technology, law, and policy. r-TLP is positively biased towards women, trans and non-binary people with a goal of providing a platform to these marginalised identities.
Prashanti is an admitted attorney of the High Court of South Africa, specialising in corporate and commercial law. She is interested in the nexus between law and technology.