Definition & Introduction
AI accountability refers to the expectation that artificial intelligence should be developed, deployed, and used in ways that allow responsibility for harmful outcomes to be assigned to the appropriate parties. AI-enabled technologies often raise accountability concerns because of the opacity and complexity of machine learning and deep learning systems, the number of stakeholders typically involved in creating and implementing AI products, and these systems' capacity to continue learning and changing after deployment.
AI systems are often criticized as "black boxes," meaning that the process by which an output was produced cannot be fully explained or interpreted by their users. If AI decision-making cannot be explained or understood, assigning responsibility and holding parties accountable for harmful outputs becomes very difficult.
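To make the black-box problem concrete, the sketch below shows one common interpretability probe, permutation feature importance: it treats a trained model as opaque and measures how much predictive accuracy depends on each input. This is a minimal illustration, not a method endorsed by any resource on this page; the model, dataset, and feature names are hypothetical stand-ins.

```python
# A minimal sketch of permutation feature importance: probe an opaque
# model from the outside by shuffling one input at a time and watching
# how much accuracy drops. All data and model choices here are
# illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn on held-out data; a large score means
# the model leans heavily on that feature to make its predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```

Probes like this do not open the box; they only describe the model's behavior from the outside, which is one reason explainability alone does not settle questions of accountability.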
For more on the subject of AI accountability, explore the curated resources below.
A loss of transparency, traceability, and accountability
"Large language models (LLMs) are inscrutable stochastic systems developed in closed environments, often by corporations that are unwilling to share information about their architecture. This makes it difficult to know how and why a system achieved a particular output. That, in turn, makes it difficult to trace the cause of—and hold the right people accountable for—harms that might arise from system outputs."
Download the communiqué to learn more about the trade-offs associated with using LLMs in diplomacy.
AI Accountability Resources
OCT 6, 2022 • Podcast
AI for Information Accessibility: Ethics & Philosophy, with Emad Mousavi & Paolo Verdini
Emad Mousavi and Paolo Verdini discuss the ethics and philosophy behind AI. They speak about the Ethics Bot project and explore accountability questions.
JUN 8, 2020 • Podcast
Mysterious Machines: The Road Ahead for AI Ethics in International Security, with Arthur Holland Michel
How do we learn to trust AI systems, and what are the implications of this technology as nations confront mass protests in a post-pandemic world?
For more on AI ethics, subscribe to the Carnegie Ethics Newsletter
Discussion Questions
- Why is transparency important in AI systems?
- Who should be held accountable when an AI system makes a mistake?
- To what extent should AI developers be accountable for unintended consequences of the systems they create?
- What responsibilities, if any, do companies have in making their AI systems explainable?
- How can we make complex AI systems more interpretable and what role, if any, should education play in that process?
- What ethical and technical principles should guide the development of AI systems to protect against AI accountability concerns?
- How, if at all, can the issue of AI accountability be regulated?
- How important is AI explainability in critical areas like healthcare and criminal justice?
A Framework for the International Governance of AI
In light of the rapid development of generative artificial intelligence applications, Carnegie Council for Ethics in International Affairs and the Institute of Electrical and Electronics Engineers (IEEE) co-developed this framework, drawing on ideas discussed in workshops held in 2023.
The proposal "is offered to stimulate deeper reflection on what we have learned from promoting and governing existing technologies, what is needed, and next steps forward."
Additional Resources
How Cities Use the Power of Public Procurement for Responsible AI
The Carnegie Endowment for International Peace analyzes how California's local governments are turning public procurement, a traditionally mundane function, into a strategic lever for responsible AI.
Developing Accountability Mechanisms for AI Systems Is Critical to the Development of Trustworthy AI
The Center for American Progress submitted a public comment in response to the National Telecommunications and Information Administration's (NTIA) request for comment on AI accountability policy.
Ethics, Transparency and Accountability Framework for Automated Decision-Making
The UK government released a seven-point framework to help government departments with the ethical use of AI-powered decision-making systems.
IBM Design for AI | Accountability
IBM posits that "every person involved in the creation of AI at any step is accountable for considering the system’s impact in the world, as are the companies invested in its development."