Human Rights Pulse

How Testing Neurotechnology On Humans Is Testing Human Rights

Recent news articles have announced human trials of neurotechnology. While such technology has the potential to address many medical conditions and offer a new way to interact with the brain and nervous system, the current approach to governance and oversight in this area is lacking. Indeed, a recent news article highlighted that Neuralink, a startup co-founded by Elon Musk, may have misled regulators. On its own this is concerning, but on a larger scale the technology is wading into uncharted territory where there is limited understanding, publicity, and regulation.

Neurotechnology seems to be slowly gaining attention. In the 76th session of the UN General Assembly held in 2021, Ahmed Shaheed, the Special Rapporteur (SR) on Freedom of Religion or Belief, presented his interim report on the freedom of thought. The report raised concerns about violations of privacy and the potential sanctions that could arise from “inferred thought”. Further, the SR highlighted that “neurotechnology can already modify or manipulate thoughts inside the brain”. However, the SR also noted that some of these techniques have not yet been adapted for use in humans. Only a few months after the interim report was published in October 2021, human trials for neurotechnology were announced.

As the SR stated, many experts already agree that existing legal frameworks are unprepared for this new technology. There have been promising developments in this area, but nothing yet offers the level of protection so desperately needed given the risks that neurotechnology presents.

WHAT IS NEUROTECHNOLOGY?

Neurotechnology refers to devices and techniques that interface with the brain and nervous system in order to map, monitor, or influence their activity. With this capability come inherent risks and the potential to severely undermine our human rights. The consumer technology market already treats data as its main “currency”: users routinely surrender their data in exchange for “free” products and services, and companies then use that data for marketing, analysis, and the improvement of products and services to make them more appealing to the end-user. The risk is that brain data could become commodified in the same way, incentivising private organisations to trade highly sensitive personal data that could be used in ways we cannot yet anticipate.

Neurotechnology is growing and evolving rapidly. A recent Organisation for Economic Co-operation and Development (OECD) report showed that between 2008 and 2016, 16,273 patents for health-related neurotechnology were filed across the top ten priority filing locations. As neurotechnology becomes more advanced and widely available, it is increasingly likely to impact and undermine human rights. It presents risks to many of the central elements of what it means to be human: dignity, autonomy, the right to freedom of thought, and the right to privacy. Autonomy has been described as the sense that you are the one generating or causing an action, and therefore that you are in control of your own behaviour. Because neurotechnology has the potential to adjust the way we think, feel, and behave, it poses a clear risk to personal autonomy.

HOW WILL THIS BE REGULATED?

Due to the nature of technology itself, neurotechnology runs the risk of falling into a “Collingridge dilemma”: in the early stages of development, too little is known about an emerging technology to regulate it effectively, while later, once it is extensively developed and embedded in society, it becomes far more difficult to change through regulation.

A current example is social media companies’ lack of transparency around their use of artificial intelligence (AI) and algorithms. This clearly highlights the need to consider regulatory interventions in neurotechnology sooner rather than later. It is possible that the risks posed by the development of neurotechnology are greater than the current risks of AI. It is also worth noting that neurotechnology and AI are often used together; one example is predictive speech synthesis for those who are unable to communicate verbally.

In June 2021, the Knowledge Transfer Network released a report titled “A transformative roadmap for neurotechnology in the UK”. The paper recommended a set of international protocols and standards for the use of neurotechnology, and emphasised the vital need for an ethical framework from the outset of the technology’s development. With human trials of neurotechnology imminent, it is critical that such a framework is created soon. The United Kingdom Government has established the Regulatory Horizons Council, an independent expert committee that advises the government on the regulatory reform needed to support the rapid and safe introduction of technological innovation. This is a useful safeguard for the UK, but regulation should also be developed at the international level.

The first international standard in this area was proposed by the OECD in December 2019, when the OECD Council published its “Recommendation on Responsible Innovation in Neurotechnology” (RRIN). The RRIN offers a solid framework for parties to follow, setting out nine key principles:

  1. Promoting responsible innovation;

  2. Prioritising safety assessment;

  3. Promoting inclusivity;

  4. Fostering scientific collaboration;

  5. Enabling societal deliberation;

  6. Enabling capacity of oversight and advisory bodies;

  7. Safeguarding personal brain data and other information;

  8. Promoting cultures of stewardship and trust across the public and private sector; and

  9. Anticipating and monitoring potential unintended use and/or misuse.

As a guide, this framework provides a solid starting point for what should be prioritised while neurotechnology is being developed. Unfortunately, as a non-binding recommendation, it currently lacks the legal force to be effective in its purpose and application.

WHAT NEXT?

In December 2021, it was reported that Diana Saville, co-founder of the brain science accelerator BrainMind, was attempting to arrange a conference for 2023 to create a private-sector ethics charter based on the OECD’s recommendations. While this would be a positive step forward, without any overarching regulation, private organisations would retain ultimate control over whether human rights are protected as neurotechnology develops.

Because many of the instruments and agreements that protect human rights were created before the advent of neurotechnology, there is a need to understand what can be done to mitigate the risks it presents. Existing instruments should be assessed to see whether they can be interpreted or amended to cover neurotechnology. New regulations may be needed to govern the development and deployment of such technology. Alternatively, new ‘neuro rights’ could be created to ensure fundamental human rights are not infringed by neurotechnology.

Interpreting legislation such as the General Data Protection Regulation so that brain data falls within the definition of personal data would be one small step towards legal protection for the privacy of our brains and thoughts. Further, the Oviedo Convention, which regulates biomedicine, could potentially be adapted to help manage the risks presented by neurotechnological developments.

What is clear is that little attention is being paid to how humanity can be protected from the possible impacts of neurotechnology. As the development of neurotechnology accelerates, it is time that discussions on how to properly regulate this new technology get up to speed.

Adam Whitter-Jones holds an LLM in Human Rights Law from Swansea University. Adam's key area of interest is the intersection of human rights and neurotechnology.