Anja Kaspersen and Wendell Wallach argued cogently in their recent article for a systemic reset of AI and technology governance, calling for AI ethics not only to prioritize transparency and accountability, but to preserve fundamental values and human dignity.
What should this reset look like? Many are grappling with this question. There are now numerous sets of AI governance principles and an increasing number of audit processes for AI applications, as well as emerging legislation and technical standards. But amidst this plethora of commitments, it's currently difficult for companies to work out what they should do, for governments to know how to regulate, and for individuals to know what standards to expect and what remedies they have if something goes wrong.
There is currently no clear lodestar in the ethics conversation. Every time a major new innovation hits the mass market—be it the Metaverse, or large language models such as ChatGPT—we revert to the beginning of the dialogue. Regarding ChatGPT, most of the latent ethical issues have not even been identified, let alone addressed. Meanwhile, technology and corporate profits develop and grow apace.
We need basic principles of governance that ensure, as per Kaspersen and Wallach, that technologies are being used for the common good, rather than for the benefit of a select few. These principles need to be of general application, and adaptable to the range of technological innovations ahead.
The role of human rights
These principles already exist: human rights. Human rights are the crystallization of ethical principles into norms that have already been developed and implemented over the last 70 years. They already distill a wide range of ethical views into concrete legal principles. Human rights can protect fundamental values and human dignity in the online world, just as they do in the offline environment. Rather than grappling to invent new standards, governments and companies alike should embrace existing human rights standards and processes as the starting point for AI governance. Human rights are the existing legal framework of fundamental values and human dignity onto which context-specific protections can be added.
The issue of fairness is an example. AI ethicists regularly debate what "fair" means in the context of AI governance: in what situations it means treating like cases alike, and in what situations it may mean treating them differently. This is often discussed as if it were a new problem, taking no account of discussions of fairness in human rights law or of how courts have interpreted it in concrete cases. One reason why AI ethicists have struggled to get fairness right is that they have often failed to recognize the context-specific approach to fairness that is a hallmark of judicial decision-making.
There are several advantages to placing human rights at the heart of AI governance. First, they are already widely accepted internationally. The Universal Declaration of Human Rights (UDHR) is known everywhere. Every country in the world is not only party to at least some of the United Nations human rights treaties, but appears periodically before the UN Human Rights Council and human rights monitoring bodies to defend its human rights record. Human rights already comprise a universal language and framework of ethical norms, which have taken many years to negotiate. Their international acceptance is particularly important at a time when gaining agreement on new norms at global level is very unlikely.
Second, human rights are relatively clear: Human rights practitioners have been traversing and resolving issues such as the meaning of "fairness" and "equality" for the last 70 years. Third, they are not extreme: On the contrary, they offer a method of balancing the rights and interests at stake in any given situation, using tests of necessity and proportionality that are familiar to courts and governments. And they offer scope for different interpretation in different countries and contexts, subject to overall parameters. That is not to say that they are perfect: they will need adaptation to AI, just as they have adapted to other developments over the years.
Dispelling the myths
So why aren't human rights already at the heart of AI governance? They are held back by various myths, such as:
- Myth: Human rights prevent innovation. Reality: Human rights do not prevent innovation, but entail compliance with minimum standards and create a level playing field, domestically and internationally, for innovators.
- Myth: Human rights are complex. Reality: They are no more complex than other systems of rules—the problem is simply that they aren't widely taught or understood.
- Myth: Human rights are about governments. Reality: Companies have human rights responsibilities too, as agreed unanimously by governments at the United Nations in 2011, and widely endorsed by businesses and civil society.
- Myth: Human rights are radical or vague. Reality: They provide specific protections against harm and discrimination for every adult and child, everywhere, on a day-to-day basis, not just the protection in extreme situations so often discussed in the media.
- Myth: Human rights are an overseas concern, primarily relevant to people in crisis situations such as armed conflicts and humanitarian emergencies. Reality: Human rights (also known as civil liberties) provide legal protections of fundamental values and human dignity in every country of the world.
Human rights are often misunderstood as offering extreme responses. To give an example, it's commonly asserted that human rights would entail that facial recognition should not be used. This is not the case: Instead, human rights law requires that any interference with the privacy of those whose faces are captured is prescribed by law, necessary for a legitimate purpose such as public safety or prevention of disorder or crime, and proportionate to that legitimate aim. This means, in short, that facial recognition that enables the police or others to amass vast amounts of data on people's movements is not permissible; but facial recognition carefully prescribed so as to avoid or minimize such data collection, while aiming to have a positive impact on public safety and security, may be permissible. This is not a radical approach but a sensible one.
In some parts of the world, human rights have attracted political baggage: they have been labelled by governments and media as a block on sensible policies, rather than as a safeguard of everyone's freedoms in the face of potential abuse. And they are often misunderstood, seen as promoting absolutist or extreme positions rather than nuanced ones. For example, in some European countries including the UK, parts of the media have criticized human rights for limiting governments' capacity to deport asylum seekers. But we should not let a few controversial topics distract us from the value of human rights to everyone, every day: in promoting equality and fairness, in putting children at the heart of decisions concerning them, in raising standards for disabled people, in setting standards of police and prison treatment, in underpinning policies on health and education, and so on.
Raising human rights awareness
One major practical issue is that, in many parts of the world, human rights are not widely understood outside the body of lawyers and activists who champion them. In particular, they are often not familiar to computer scientists and coders. This needs to change: there should be more interdisciplinary education on human rights at universities, more human rights training for technologists as well as for corporate executives, and more human rights expertise in governments. The conversation about AI governance in boardrooms and legislatures needs to embrace civil society, rather than leaving it to a separate conversation on the sidelines. Human rights need to be part of the mainstream dialogue, not merely a topic for legal experts.
In sum, the challenge for AI governance in 2023 is not only to embrace ethics, but to take human rights as the starting point for conversations about AI ethics and regulation. Companies developing and using AI should bring in more human rights expertise, and investors should measure this through ESG frameworks. Governments should draw from human rights in developing AI regulation and policies. International organizations should stress the role of human rights in AI governance.
Doing so will be a shortcut to ensuring that AI does indeed preserve fundamental values and human dignity, and is indeed harnessed for the common good, not only in a few countries but globally.
Kate Jones is an associate fellow with Chatham House and author of the recently released Chatham House Research Paper, AI Governance and Human Rights: Resetting the Relationship, London: Royal Institute of International Affairs, 2023.
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the author and do not necessarily reflect the position of Carnegie Council.