Artificial Intelligence (AI) is the ability of a programmed machine or system to sense, act, and learn intelligently, often defined as the completion of tasks normally achievable only through the naturally occurring intelligence of humans and animals. Researchers such as Stuart Russell have described AI as something that acts rationally on the basis of what it has perceived. AI researchers have drawn on a variety of techniques, such as reinforcement learning, which works by maximising cumulative rewards, and artificial neural networks, which simulate the biological neural networks found in animals. The goals of this learning vary from broad ambitions such as artificial general intelligence (AGI), whereby a machine can learn any arbitrary task or problem a human can, to specific purposes, for example using natural language processing and sentiment analysis to analyse large amounts of unstructured natural language data.
APPLICATIONS OF AI FOR HUMANITARIAN PURPOSES
Artificial Intelligence has driven humanitarian breakthroughs across a plethora of industries: DeepMind's AlphaFold predicting the 3D structures of proteins in tackling protein folding, Ithaca restoring ancient lost texts and identifying their original locations, and the World Food Programme's real-time HungerMap monitoring the severity and magnitude of hunger worldwide. AI has also proven capable of diagnosing lung tumours with greater accuracy than radiographers. In robotics, companies such as Everyday Robots are focused on creating intelligent, broad-use social robots. AI is set to proliferate at an astounding rate, and this increasing capability and ubiquity across all areas of life raises significant ethical dilemmas. As we trend towards AGI and more advanced applications of AI, the datasets powering these innovations are growing in size and quality, alongside both computing power and human expertise. With such large amounts of data being used at scale, the privacy and security of that data is a necessity. Data privacy has been a growing concern throughout the last decade, with those concerns validated by scandals such as Cambridge Analytica's data misuse.
With the growing ubiquity of IoT (Internet of Things) connected devices, increasing amounts of personal biometric data are being collected, sometimes even through AI-empowered children's dolls such as Hello Barbie. Consumers are often unaware of how their data is consumed and used in AI models. Improving trust in data privacy through clear governance and transparency is crucial. If AI is to be used to solve ethical issues, it must be set up organisationally, through governance, policy, and principles, in a way that corresponds with and fosters an ethical mindset. Ethics in AI requires an interdisciplinary and holistic approach from experts across business, law, government, and research, and is critical to ensuring the healthy development of AI technologies and their impact on society. It should be noted that not all robots use AI, and likewise AI is not only used in robotics. A robot, in the context discussed below, refers to a physical entity that can sense, react to, and reason about its environment.
THE GROWING IMPORTANCE OF ROBOT ETHICS
Robot ethics were thrust to the forefront of societal consciousness by Isaac Asimov's Three Laws of Robotics, introduced in his fiction and collected in 'I, Robot': a robot may not injure a human being or, through inaction, allow a human to come to harm; it must obey human orders except where they conflict with the first law; and it must protect its own existence except where doing so conflicts with the first two laws. Whilst researchers have since developed these laws further to accommodate the growing real-life adoption of robotics, it is interesting to note the role fantasy and fictional portrayals have played in both highlighting and exacerbating fears around AI-empowered robotics. Whether through the dystopian 'killer robot' portrayals of Artificial Intelligence, The Matrix, and The Terminator, or through films such as Her and Ex Machina, which probe the boundaries and balance between human and machine, many ethical and societal questions have been raised in the public consciousness. This creates a heavily deterministic, dystopian future in society's mind, as though these consequences are inevitable, despite being extremely far from the realities of robotic development.
The potential for a technological singularity, in which superintelligent AI exceeds the capabilities of humans, is another theme often raised in science fiction; Andrew Ng has dismissed such worries as equivalent to worrying about overpopulation on Mars, given the timescales involved. Alongside fiction, real applications of AI have shown signs of perilous behaviour, such as the racism taught to Microsoft's Tay bot by Twitter users. Ultimately, robots, AI, and their impact on society are driven by the humans who design them, along with their inherent biases, moralities, and use cases. Whilst this can serve causes beneficial to humanity, it could likewise cause unprecedented damage to people and infrastructure, whether intentionally or accidentally, through dramatic influence or through the softer, subtler biases we explore in the next section.
KEY ETHICAL CONSIDERATIONS IN ARTIFICIAL INTELLIGENCE AND ROBOTICS
Given the swathes of data often used to train AI models, there is an ever-present risk of algorithmic bias. Examples of data bias have been well documented: Caroline Criado Perez's work in Invisible Women, for instance, highlights how everything from offices to transport to medical treatment has been implicitly designed for men due to skewed data sets (or, in the case of car safety testing, the crash-test dummies used), ultimately leading to significantly worse outcomes for women in incidents such as car crashes.
Given that AI is arguably a collection of inputs, whether from programmers or from self-generated data sets, these biases risk being entrenched in intelligent social robots, particularly through feedback loops. Robots themselves are overwhelmingly gendered as female, whether AI assistants such as Amazon's Alexa or Hanson Robotics' Sophia; this gendering theoretically serves no purpose other than marketing. Likewise, robots are overwhelmingly designed as 'white', again failing to reflect the diversity of society and creating a strong risk of racial bias, further exemplified by the image libraries behind generative models such as DALL-E, which have associated prompts such as 'man sitting in prison' or 'angry man' with pictures of people of colour.
AI should ultimately reflect society as a whole, to avoid propagating society's biases and oversights, both subtle and overt. If a robot is marketed with specific qualities, does reflecting a particular gender or race perpetuate stereotypes? Is a 'white' robot the most suitable for showcasing intelligence and trustworthiness? Must a 'female' robot be portrayed as caring and friendly? Ultimately there is no need to gender robots in this manner, though the anthropomorphism of robots remains an interesting area of ethical debate.
The work of Kate Darling on human-robot interaction, for example in her book The New Breed, highlights that the anthropomorphisation of robots is unnecessary and by its very nature causes ethical dilemmas. She attributes it, amongst other factors, to a biological hardwiring to respond to movement, noting that people feel emotional attachment and empathy towards their Roomba vacuums. We could arguably see a reality similar to that of animals, where some are used almost purely as means to an end, whilst others are elevated for their intrinsic value as pets considered family members. In Japan, for example, 114 'deceased' Sony Aibo robot dogs were given a traditional memorial as loved members of the family. Alongside this, the very programming of language has a profound impact. Even something as simple as saying 'please' and 'thank you' in conversations with robots can have an astounding reciprocal effect in terms of language reinforcement, which then carries over into how people treat others.
Humanising robots can have negative implications, such as emotional attachment causing distress when harm comes to robots that arguably should not be thought of as sentient, living beings. This risk is greater with social robots, due to their physical presence, than with purely virtual AI. As companies begin to deploy robots, the line blurs between providing a useful companion for people, particularly children, and exploiting that personal connection for advertising or data collection. By contrast, robots such as Paro, the therapy robot seal, use this innate emotional and moral connection to aid and comfort autistic children. Treating robots as social entities ultimately leads to the question of robot rights, and of the point at which they become treated similarly to animals in this regard. Sophia, the humanoid robot, has even been granted citizenship in Saudi Arabia, raising questions about the political implications.
As questions around the rights of AI and robots emerge, AI creation and intellectual property remain a grey area. AI art, exemplified by projects such as The Next Rembrandt and the AI-assisted completion of Beethoven's unfinished symphony, highlights the question of where ownership of AI creations lies. Shutterstock, for example, is launching a creators' fund to reimburse artists as it sells AI-generated artwork in partnership with OpenAI. Alongside achievements, mistakes and unintended consequences also require owners: if an autonomous vehicle must choose between hitting two adults or one child, an ethical dilemma in its own right, where does responsibility lie? Ownership could fall to developers, companies, the public domain, or other entities.
Lastly, we consider the ethical challenge most often thrust into the limelight of social narratives: that robots and AI are coming to 'steal our jobs' through automation. An ethical, human-first approach involves balancing the jobs lost against those gained, and against those that have simply changed. Treating automation ethically means not displacing humans but empowering them, making their working lives easier and more interesting. Human labour is ultimately not a simply replaceable commodity.
BUILDING AN ETHICS-LED FUTURE WITH AI
AI and robotics have the potential to revolutionise the world, solving humanity's biggest problems and unlocking a new generation of prosperity for all. This success, however, relies on ensuring AI and robots reflect not just our ideal of present society but also that of the future. Friendly AI and robotics should benefit humanity, both through their purpose and design and through making ethical decisions, supported by clear governance and policy. The impact AI and robotics will have on humanity ultimately depends on the ethical inputs provided by humans. As such, it is important to drive holistic assessments of the ethical concerns detailed in this article to ensure we can build a healthy, sustainable future for all.
Raj spent time working in The Hague in human rights before leading and advising on emerging-technology strategy and innovation for the largest global companies. Alongside his current career in Web3 strategy, he mentors startups from war-torn countries.