Recent developments have further raised the stakes for the ethics of AI systems and applications. COVID-19 dramatically accelerated the reach of AI systems into every aspect of daily life by shifting more social and economic activity into the digital world. Leading technology companies now effectively control many public services and digital infrastructures through procurement or outsourcing schemes. For example, governments and health care providers have deployed AI systems and algorithmic technologies at unprecedented scale in applications such as proximity tracking and tracing and bioinformatic responses, triggering a new economic sector built on the flow of biodata.
Extremely troubling is the fact that the people most vulnerable to the negative impacts of this rapid expansion of AI systems are often the least able to join the conversation about them, either because they have little or no digital access or because their lack of digital literacy leaves them ripe for exploitation.
Such vulnerable groups are often nominally included in discussions but not empowered to take a meaningful part in decision-making. This engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation.
Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal and ethical oversight while simultaneously addressing existing inequalities.
So, why hasn't more been done? There are three main issues at play:
First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.
Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems arise during the earlier stages of conceptualization, research, and design. Or they fail to assess whether an AI system operates at the level of maturity required to avoid failure in complex adaptive systems.
Or they focus on some aspects of ethics while ignoring others that are more fundamental and challenging. This is the problem known as "ethics washing": creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed in order to justify pressing forward with systems that end up deepening current patterns of inequity.
Let's be clear: every choice entails tradeoffs. "Ethics talk" is often about underscoring the various tradeoffs entailed in differing courses of action. Once a course has been selected, comprehensive ethical oversight also means addressing the considerations not accommodated by the chosen option, a step essential to any future verification effort. This vital part of the process is often a stumbling block for those trying to address the ethics of AI.
The second major issue is that to date all the talk about ethics is simply that: talk.
We've yet to see these discussions translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives.
For all the attempts to involve a broad range of stakeholders, rules remain unclear or non-existent; commercial and geopolitical interests are diverging; innovation continues to happen in small, secretive, and private spaces; and decisions are concentrated in a few hands while inequalities grow at an alarming rate.
Major areas of concern include the power of AI systems to enable surveillance, pollution of public discourse by social media bots, and algorithmic bias: in a variety of sensitive areas, from health care to employment to justice, various actors are rolling out AI systems that may be brilliant at identifying correlations but do not understand causation or consequences.
Overstating the capabilities of AI systems is a well-known problem in AI research and machine learning. Extrapolating from and memorizing patterns in data to address narrowly specified tasks is a far cry from understanding the actual problem a system is designed to solve. Too often, those in charge of embedding and deploying AI systems do not understand how they work, or what potential they have to perpetuate existing inequalities and create new ones.
Decision-makers also typically have a poor understanding of the scientific methods and complexity underpinning the what, how, and why of building an AI system. Often they take a myopic, tech-solutionist optimization approach to applying AI systems to global, industrial, and societal challenges, blinded by what is on offer rather than guided by what the problem requires. Questions about potential downstream consequences also go unaddressed, such as the environmental impact of the resources required to build, train, and run an AI system; interoperability; and the feasibility of safely and securely interrupting a system once deployed.
A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.
There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.
A few examples of algorithmic bias have penetrated the public discourse. But the most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by how and why we embed the AI systems that already shape everyone's daily lives.
Non-technical people often wrongly assume that AI systems are apolitical by nature, not comprehending that such systems can entrench structural inequalities, particularly when they encounter situations outside the context in which they were created and trained. Concepts such as ethics, equality, and governance can seem lofty and abstract. There is a critical need to translate them into concrete, relatable explanations of how AI systems impact people today.
Part of the challenge in widening the public discourse is that there is still no agreed methodology or shared vernacular for discussing how AI systems are embedded or applied, nor any agreed method to assess, estimate, and verify the effects of AI systems on individuals, society, or international stability. Language is rooted in culture, and the new is understood only by analogy to the familiar; finding the right metaphors or tools is particularly difficult when so much about AI is unlike anything that has come before.
So, what are we, collectively, not getting right in our appetite to promote responsible uses of AI systems and algorithmic technologies? What is needed to widen the discussions around ethics and AI, and to speed the translation of principles into practice with greater accuracy, reliability, and validity?
It has become a cliché to say the world is at an inflection point in history. But as people who are closely involved in the AI and ethics discourse, we know that it is true—and that we have very little understanding of what we are getting into, or wisdom to navigate the uncertainties ahead.
Large-scale technological transformations have always led to deep societal, economic, and political change, and it has always taken time to figure out how best to respond to protect people's wellbeing. AI carries that same transformative potential, but we do not have much time to get it right. Moreover, the belief that incompetent or immature AI systems can be remedied once deployed, or that some antidote exists (especially one compatible with cybersecurity), is an erroneous and potentially dangerous delusion.
We must work to build on existing expertise and networks to expedite and scale ethics-focused AI initiatives: strengthening anthropological and scientific intelligence, establishing a new dialogue, empowering all relevant stakeholders to engage meaningfully, developing practical and participatory ways to ensure transparency, ascribing responsibility, and preventing AI from driving inequality in ways that create serious social harms.
Anja Kaspersen and Wendell Wallach are senior fellows at Carnegie Council for Ethics in International Affairs. Together with an international advisory board, they direct the Carnegie Artificial Intelligence and Equality Initiative (AIEI), which seeks to understand the innumerable ways in which AI impacts equality and, in response, to propose potential mechanisms to ensure the benefits of AI for all people.