Envisioning Modalities for AI Governance: A Response from AIEI to the UN Tech Envoy

Sep 29, 2023

This submission has been organized by Carnegie Council's Senior Fellow Anja Kaspersen and Carnegie-Uehiro Fellow Wendell Wallach, co-directors of the Artificial Intelligence & Equality Initiative (AIEI). The submission was constructed with insights from individuals and institutions across the academic, technology, and governance spaces.

A Framework for the International Governance of AI

Overview

Promoting the benefits of innovative technologies requires addressing potential societal disruptions and ensuring public safety and security. The rapid deployment of generative artificial intelligence (AI) applications underscores the urgency of establishing robust governance mechanisms for effective ethical and legal oversight. This concept note proposes the immediate creation of a global AI observatory supported by cooperative consultative mechanisms to identify and disseminate best practices, standards, and tools for the comprehensive international governance of AI systems.

Purpose

This initiative directs attention to practical ways to put in place a governance framework that builds on existing resources and can have an immediate effect. Such a framework would enable the constructive use of AI and related technologies while helping to prevent immature uses or misuses that cause societal disruption or pose threats to public safety and international stability.

From Principles to Practice

Numerous codes of conduct and lists of principles for the responsible use of AI already exist; those issued by UNESCO and the OECD/G20 are the two most widely endorsed. In recent years, various institutions have been working to turn these principles into practice through domain-specific standards. A few States and regions have made proposals, and some have even enacted constraints upon specific uses of AI. For example, the European Commission released a comprehensive legal framework (the EU AI Act) aiming to ensure safe, transparent, traceable, non-discriminatory, and environmentally sound AI systems overseen by humans. The Beijing Artificial Intelligence Principles were followed by new regulations placed upon corporations and applications by the Cyberspace Administration of China. Various initiatives at the federal and state level in the United States further emphasize the need for a legislative framework. The UN Secretary-General also recently proposed a High-Level Panel to consider IAEA-like oversight of AI.

Proposed Framework

Governance of AI is difficult because it impacts nearly every facet of modern life. Challenges range from interoperability to ensuring that applications contribute to, and do not undermine, the realization of the SDGs. These challenges change throughout the lifecycle of a system and as technologies evolve. A global governance framework must build upon the work of respected existing institutions and new initiatives fulfilling key tasks, such as monitoring, verification, and enforcement of compliance. Only a truly agile and flexible approach to governance can provide continuous oversight for evolving technologies that have broad applications, differing timelines for realization and deployment, and a plethora of standards and practices with differing purposes.

Given political divergences around issues of technology policy and governance, creating a new global body will take time. Nevertheless, specific functions can and should be attended to immediately. For example, a global observatory for AI could be managed within an existing neutral intermediary capable of working in a distributed manner with other nonprofit technical bodies and agencies qualified in matters related to AI research and its impact on society.

To establish an effective international AI governance structure, five symbiotic components are necessary:

1. A neutral technical organization to sort through which legal frameworks, best practices, and standards have risen to the highest level of global acceptance. Ongoing reassessments will be necessary as the technologies and regulatory paradigms evolve.

2. A Global AI Observatory (GAIO) tasked with standardized reporting, at both general and domain-specific levels, on the characteristics, functions, and features of AI and related systems released and deployed. These efforts will enable assessment of AI systems’ compliance with agreed-upon standards. Reports should be updated in as close to real time as possible to facilitate the coordination of early responses before significant harm occurs. Existing observatories, such as the OECD’s, do not represent all countries and stakeholders, nor do they provide oversight, enable sufficient depth of analysis, or fulfill all the tasks proposed below.

  • GAIO would orchestrate global debate and cooperation by convening experts and other relevant and inclusive stakeholders as needed.
  • GAIO would publish an annual report on the state of AI which analyzes key issues, patterns, standardization efforts, and investments that have arisen during the previous year, and the choices governments, elected leaders, and international organizations need to consider. This would involve strategic foresight and scenarios focused primarily on technologies likely to go live in the succeeding two to three years. These reports will encourage the broadest possible agreement on the purposes and applicable norms of AI platforms and specific systems.
  • GAIO would develop and continuously update four registries. Together, a registry of adverse incidents and a registry of new, emerging, and (where possible) anticipated applications will help governments and international regulators attend to potential harm before deployment of new systems.
  • The third registry will track the history of AI systems, including information on testing, verification, updates, and the experience of States that have deployed them. This will help the many countries that lack the resources to evaluate such systems. A fourth registry will maintain a global repository for data, code, and model provenance. (A minimal data model for these registries is sketched after this list.)

3. A normative governance capability with limited enforcement powers to promote compliance with global standards for the ethical and responsible use of AI and related technologies. This could involve creating a “technology passport” system to ease assessments across jurisdictions and regulatory landscapes. Support from existing international actors, such as the UN, would provide legitimacy and a mandate for this capability. It could be developed within the UN ecosystem through collaboration between the ITU, UNESCO, and OHCHR, supported by global technical organizations such as IEEE.

4. A conformity assessment and process certification toolbox to promote responsible behavior and assist with confidence-building measures and transparency efforts. Such assessments should not be performed by the companies that develop AI systems or the tools used to assess those systems.

5. Ongoing development of technological tools (“regulation in a box”), whether embedded in software, hardware, or both, is necessary for transparency, accountability, validation, auditing, and safety protocols, and to address issues related to the preservation of human, social, and political rights in all digital goods, each of which is a critical element of confidence-building measures. Developed with other actors in the digital space, these tools should be continuously audited for erroneous activity and adapted by the scientific and technical community. They must be accessible to all parties at no cost. Assistance from the corporate community in providing and developing tools and insight into technical feasibility is essential, as will be their suggestions regarding norms. However, regulatory capture by those with the most to gain financially is unacceptable. Corporations should play no final role in setting norms, enforcing them, or deciding to whom the tools should be made available.
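To make the registry proposal in component 2 concrete, the sketch below shows one way the four registries could share a common record format. It is a minimal illustration only; every name and field (RegistryKind, RegistryEntry, and so on) is hypothetical and not part of the framework itself.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RegistryKind(Enum):
    ADVERSE_INCIDENT = "adverse_incident"          # registry 1
    EMERGING_APPLICATION = "emerging_application"  # registry 2
    SYSTEM_HISTORY = "system_history"              # registry 3
    PROVENANCE = "provenance"                      # registry 4: data, code, models

@dataclass
class RegistryEntry:
    registry: RegistryKind
    system_id: str                # stable identifier for the AI system
    reported_on: date
    reporting_party: str          # state, developer, or civil-society reporter
    domain: str                   # e.g. "healthcare", "elections"
    summary: str                  # what happened, or what is being registered
    evidence_uris: list[str] = field(default_factory=list)  # test reports, audits
```

A shared format of this kind is what would let the four registries cross-reference one another, for example linking an adverse incident to the deployment history and provenance records of the system involved.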

We are fully aware that this skeleton framework raises countless questions as to how such governance mechanisms would be implemented and managed, how their neutrality and trustworthiness could be established and maintained, and how political and technical disagreements would be resolved and potentially harmful consequences remediated. However, it is offered to stimulate deeper reflection on what we have learned from promoting and governing existing technologies, what is needed, and the next steps forward.

Emerging and Converging Technologies

This framework has significant potential applications beyond the AI space. While generative AI is making it urgent to put international governance in place, many other existing, emerging, and anticipated fields of scientific discovery and technological innovation will require oversight; these fields are amplifying each other’s development and converging in ways difficult to predict. If effective, many of the components proposed here could serve as models for their governance.

This proposal, developed by Carnegie Council for Ethics in International Affairs (CCEIA) in collaboration with IEEE, draws on ideas and concepts discussed in two June 2023 multi-disciplinary expert workshops organized by Carnegie Council’s AI & Equality Initiative and IEEE SA and hosted by UNESCO in Paris and ITU in Geneva. Participation in those workshops, however, does not imply endorsement of this framework or any specific ideas within the framework.

Workshop Participants (in alphabetical order):


Doaa Abu Elyounes, Phillippa Biggs, Karine Caunes, Raja Chatila, Sean Cleary, Nicolas Davis, Cristian de Francia, Meeri Haataja, Peggy Hicks, Konstantinos Karachalios, Anja Kaspersen, Gary Marcus, Doreen Bogdan-Martin, Preetam Maloor, Michael Møller, Corinne Momal-Vanian, Geoff Mulgan, Gabriela Ramos, Nanjira Sambuli, Reinhard Scholl, Clare Stark, Sofia Vallecorsa, Wendell Wallach, Frederic Werner.


Given the Absence of Hard Law, the Roles for Soft Law Functions in the International Governance of AI

by Wendell Wallach, Anka Reuel, and Anja Kaspersen

Abstract:

The advent of foundation models has alerted diplomats, legislators, and citizens around the world to the need for AI governance that amplifies benefits while minimizing risks and undesired societal impacts. The prospect that AI systems might be abused or misused, or might unintentionally undermine international stability, equity, and human rights, demands a high degree of cooperation, oversight, and regulation. However, governments are not moving quickly enough to put in place an international hard-law regime with enforcement authority. In the absence of such a regime, soft law becomes a lever to help shape the trajectory of AI development and encourage international cooperation around its normative and technical governance. In this paper, we give an overview of key soft-law functions in the context of international AI governance and mechanisms to fulfill them. We further propose the establishment of a Global AI Observatory, in line with Mulgan et al. (2023), to fulfill functions that have not been (sufficiently) picked up by, or go beyond the mandate of, existing institutions.


The Case for a Global AI Observatory (GAIO), 2023

The authors of the following GAIO proposal are Professor Sir Geoff Mulgan, UCL; Professor Thomas Malone, MIT; Divya Siddharth and Saffron Huang, the Collective Intelligence Project, Oxford University; Joshua Tan, Executive Director, the Metagovernance Project; Lewis Hammond, Cooperative AI

Here we suggest a plausible, and complementary, step that the world could agree on now as a necessary condition for more serious regulation of AI in the future (the proposal draws on work with colleagues at UCL, MIT, Oxford, the Collective Intelligence Project, Metagov and the Cooperative AI Foundation as well as previous proposals).

A Global AI Observatory (GAIO) would provide the necessary facts and analysis to support decision-making. It would synthesize the science and evidence needed to support a diversity of governance responses and address the great paradox of a field founded on data: so little is known about what’s happening in AI and what might lie ahead. Currently no institution exists to advise the world, assessing and analyzing both the risks and the opportunities, and much of the most important work is kept deliberately secret. GAIO would fill this gap.

The world already has a model in the Intergovernmental Panel on Climate Change (IPCC). Established in 1988 by the United Nations with member countries from around the world, the IPCC provides governments with scientific information they can use to develop climate policies.

A comparable body for AI would provide a reliable basis of data, models, and interpretation to guide policy and broader decision-making about AI. A GAIO would have to be quite different from the IPCC in some respects, working far faster and in more iterative ways. But ideally it would, like the IPCC, work closely with governments, providing them with the guidance they need to design laws and regulations.

At present, numerous bodies collect valuable AI-related metrics. Nation-states track developments within their borders; private enterprises gather relevant industry data; and organizations like the OECD’s Artificial Intelligence Policy Observatory focus on national AI policies and trends. There have also been attempts to map options for the governance of more advanced AI, such as the mapping from governance.ai. While these initiatives are a crucial beginning, there continues to be a gulf between how scientists think about these issues and how the public, governments, and politicians do. Moreover, much about AI remains opaque, often deliberately. Yet it is impossible to sensibly regulate what governments don’t understand.

GAIO could help fill this gap through six main areas of activity:

  • The first is the creation of a global, standardized incident reporting database concentrating on critical interactions between AI systems and the real world. For example, in the domain of bio-risk, where AI could aid in creating dangerous pathogens, a structured framework for documenting incidents related to such risks could help mitigate threats. A centralized database would record essential details about specific incidents involving AI applications and their consequences in diverse settings, examining factors such as the system’s purpose, use cases, and metadata about training and evaluation processes (a minimal report schema is sketched after this list). Standardized incident reports could enable cross-border coordination, decreasing the odds and mitigating the potential effects of miscommunication within a likely arms race over AI, with consequences as dire as the arms race over nuclear weapons.
  • Secondly, GAIO would assemble a registry of crucial AI systems, focused on the applications with the largest social and economic impacts, as measured by the number of people affected, person-hours of interaction, and the stakes of their effects, in order to track their potential consequences. It would ideally also set rules for providing access to models to allow for scrutiny. Singapore already has such a registry, and the UK government is considering something similar within the country; at some point, similar approaches need to become global.
  • Thirdly, GAIO would bring together a shared body of data and analysis of the key facts of AI: spending, geography, key fields, uses, and applications. (There are many sources for these, but they are still not brought together in easily accessible forms, and much about investment remains very opaque.)
  • Fourth, GAIO would bring together global knowledge about the impacts of AI on critical areas through working groups covering topics such as labor markets, education, media, and healthcare. These would orchestrate the gathering of data, interpretation, and forecasting, for example of the potential effects of LLMs on jobs and skills. GAIO would also include metrics for both positive and negative impacts of AI, such as the economic value created by AI products alongside the impact of AI-enabled social media on mental health and political polarization.
  • Fifth, GAIO could offer options for regulation and policy for national governments, as well as potential legislative assistance (drawing on lessons from Co-Develop’s promotion of digital public infrastructure and from the IAEA), providing model laws and rules that could be adapted to different contexts.
  • Lastly, GAIO would orchestrate global debate through an annual report on the state of AI that analyzes key issues, patterns that arise, and choices governments and international organizations need to consider. This would involve a rolling program of predictions and scenarios focused primarily on technologies likely to go live in the succeeding two to three years, and could build on existing efforts such as the AI Index produced by Stanford University.
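As a concrete illustration of the first activity above, the sketch below shows what a standardized, exchange-ready incident report might contain. It is a minimal sketch only; the field names and the severity scale are assumptions for illustration, not an agreed schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class IncidentReport:
    incident_id: str
    system_name: str
    system_purpose: str      # the declared purpose of the system
    use_case: str            # the context in which the incident occurred
    jurisdiction: str        # e.g. an ISO 3166 country code
    severity: int            # assumed scale: 1 (negligible) to 5 (critical)
    training_metadata: dict  # e.g. data sources, evaluation results
    consequences: str        # observed real-world effects

def to_exchange_format(report: IncidentReport) -> str:
    """Serialize a report to a shared JSON format so that national
    registries can exchange records without translation loss."""
    return json.dumps(asdict(report), indent=2)
```

The point of a fixed, minimal schema is that a report filed in one jurisdiction remains interpretable in another, which is precisely what cross-border coordination requires.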

GAIO would also need to innovate. As indicated, it would need to act far faster than the IPCC, attempting quick assessments of new developments. Crucially, it could use collective intelligence methods to bring together inputs from thousands of scientists and citizens, which is essential in tracking emergent capabilities in a fast-moving and complex field. In addition, it could introduce whistleblowing methods similar to the US government's generous incentives for employees to report on harmful or illegal actions.

To succeed, GAIO would need legitimacy comparable to the IPCC’s. This can be achieved through its membership, which would include governments, scientific bodies, and universities, among others, and by ensuring a sharp focus on facts and analysis rather than prescription, which would be left in the hands of governments. It would ideally have formal links to other bodies with a clear role in this space: the ITU, IEEE, UNESCO, and the International Science Council. It should aim to collaborate as closely as possible with others already doing excellent work in this space, from the OECD to academic centers.

Contributors to the work of GAIO would be selected, as with the IPCC, on the basis of nominations by member organizations to ensure depth of expertise, disciplinary diversity, and global representativeness, along with maximum transparency to minimize both real and perceived conflicts of interest.

The AI community and businesses using AI tend to be suspicious of government involvement, often viewing it solely as a source of restrictions. But the age of self-governance is now over. What’s proposed here is an organization that exists in part for governments, but with the primary work done by scientists, drawing on successful attempts to govern many other technologies, from human fertilization and cloning to biological and nuclear weapons.

In recent years the UN system has struggled to cope with the rising influence of digital technologies. It has created many committees and panels, often with grand titles, but generally with little effect. The greatest risk now is that there will be multiple unconnected efforts, none of which achieves sufficient traction. The media and politicians have been easily distracted by wild claims of existential risk, and few feel confident enough to challenge the major corporations, especially when threatened with the prospect of their citizens being cut off from the benefits of OpenAI or Google.

So, legitimating a new body will not be easy. GAIO will need to convince key players from the US, China, the UK, the EU, and India, among others, that it will fill a vital gap, and will need to persuade the major businesses that their attempts at controlling the agenda, without any pooling of global knowledge and assessment, are unlikely to survive for long. The fundamental case for its creation is that no country will benefit from out-of-control AI, just as no country benefits from out-of-control pathogens.

How nations respond is bound to vary. China for example recently proposed to ban LLMs with “any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity.” The US, by contrast, is likely to want the maximum freedom.

But shared knowledge and analysis are the necessary condition for nations to be able to decide their own priorities. Unmanaged artificial intelligence threatens the infrastructures and information spaces we all need to think, act, and thrive. Pooling knowledge in intelligent ways is the vital start on the way to better harnessing the benefits of artificial intelligence and avoiding the dangers.


“Middleware” and Modalities for the International Governance of AI

by Anja Kaspersen

The rapid rise and broad adoption of generative AI technologies underscore the urgent need for comprehensive governance that encompasses every stage of an AI system’s lifecycle. When AI technologies are used in the structures and institutions of society with care, caution, and consistency, they have the potential to foster collective progress and elevate capabilities. However, if deployed rashly or without safeguards, they pose substantial risks: they can destabilize societies, jeopardize public and individual safety, amplify existing inequalities, and undermine international relations. The expansive reach and influence of these technologies underscore the urgency of envisioning new international governance modalities and safeguards.

Historically, the AI research and development sector has resisted government oversight in favor of self-regulation, while governments have lagged behind in even grappling with the need for oversight. This approach is inadequate. As a handful of corporations control AI technologies that permeate every facet of our lives, and as challenges continue to mount across domains, a power imbalance emerges. The need for rigorous global ethical oversight and decisive regulatory action becomes undeniable.

Many states and regional groups are either implementing restrictions or contemplating them, especially as investments in AI-powered national language model technologies and downstream applications grow. However, some have not begun any formal deliberations, while others are voicing concerns about lagging in this rapidly evolving field due to their technological inexperience and limited engagement capability. This has resulted in a fragmented governance landscape marked by a disparate proliferation of models and a patchwork of rules that reflect differing cultural norms and strategic goals. Notably, only a handful of laws have been enacted that specifically address AI.

To some degree, competition in the regulatory landscape can be beneficial: centralized technology governance can stifle innovation and agility. However, too much fragmentation can allow unethical practices to become established through “forum shopping,” in which companies pivot towards jurisdictions with more lenient regulations. Addressing this risk requires global collaboration.

It is clear that AI needs a customized international governance framework drawing on models developed by organizations such as the IPCC, which assesses the scale and impacts of climate change; the IAEA, which enhances the contribution of atomic energy to peace, health, and prosperity while ensuring that it is not used to further any military purpose; and CERN, which advances research in fundamental physics. Such an approach, bringing together political and technical functions, would serve as a bridge between technologists and policymakers, balancing promotion and control to address the gaps left by current mechanisms and approaches. Furthermore, such a framework should promote cooperation and dialogue among stakeholders and engage the public in meaningful and informed discussions.

The ultimate objective should be binding global regulations, premised on instruments for monitoring, reporting, verification, and, where necessary, enforcement, ideally supported by a treaty. However, immediate steps toward this framework should be initiated now. These intermediary actions can be likened to “middleware” in computer science, which enhances interoperability among diverse devices and systems. This governance “middleware” can not only connect and align existing efforts but also pave the way to more enforceable measures in the future.
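The computing analogy is worth making concrete. In software, middleware sits between systems that were never designed to talk to each other and translates between them. The toy sketch below (all formats and field names invented for illustration) shows the pattern governance “middleware” would mirror: existing national and corporate reporting regimes keep their own formats, while adapters map them into one shared view.

```python
# Toy illustration of the middleware pattern. The "EU" and "US" report
# formats and their field names are invented for this example.

def from_eu_format(record: dict) -> dict:
    return {"system": record["ai_system_name"], "risk": record["risk_tier"]}

def from_us_format(record: dict) -> dict:
    return {"system": record["model"], "risk": record["risk_level"]}

ADAPTERS = {"eu": from_eu_format, "us": from_us_format}

def normalize(source: str, record: dict) -> dict:
    """Translate a jurisdiction-specific record into the shared schema,
    leaving the originating system untouched."""
    return ADAPTERS[source](record)

# Two differently shaped records become comparable:
print(normalize("eu", {"ai_system_name": "X", "risk_tier": "high"}))
print(normalize("us", {"model": "Y", "risk_level": "high"}))
```

The governance point is the same: middleware does not replace existing regimes; it makes them interoperable while more binding arrangements are negotiated.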

“Middleware” entities and providers could be established under the auspices of existing organizations, created anew, or formed as a combination of the two, bringing in other initiatives with demonstrated global legitimacy and technical proficiency. The activities proposed below need not all be performed by one entity; the functions can be distributed.

In seeking to achieve global AI governance, we face two pressing risks: that well-intentioned but overly ambitious efforts fail, and that proposals limit their thrust to admirable objectives without operational substance. The areas and modalities proposed below aim to overcome political divergence, organizational red tape, deterministic tech narratives, and efforts to maintain industry self-regulation. They are intended as tangible proposals designed to help navigate the complexities of global technological governance and overcome fragmentation. While this paper doesn’t delve into the specifics, it’s evident that some of the proposed areas for further consideration might require concessions regarding both emergent and existing intellectual property. It’s acknowledged that national security imperatives will play a significant role in how these issues are addressed. Therefore, highlighting the collective benefits of demonstrating robust technology governance for international stability will be crucial for the acceptance of any proposals.

It’s important to note that the list is not ranked by importance. Instead, each item is presented as a potential approach and area for further exploration. Some of these might be immediately relevant, while others may not be. The activities suggested for middleware governance entities are based on the Framework for the International Governance of AI, a collaborative effort between the Artificial Intelligence & Equality Initiative (AIEI) at Carnegie Council for Ethics in International Affairs (CCEIA) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA). The modalities proposed below are structured to reflect elements suitable for a more functional middleware approach, to facilitate and expedite the transition to a formal international agreement, provided states agree on a feasible path forward.

AI Impact Hub: A global hub, developed in close collaboration with relevant technical communities based on the intended objectives, could monitor high-impact AI systems, their uses, and edge cases worldwide. Such a registry can promote collaboration by documenting both the training data and the subsequent outcomes and impacts of these systems.

Assessment of Acceptance Levels: Evaluating and publicizing which normative frameworks, regulations, best practices, and standards in AI governance have garnered the most global acceptance and demonstrated impact can enhance their adoption. This is especially pertinent when considering access to generative AI technologies and models, transparency of training data, compute resources, environmental impact, declarations regarding product maturity, and the use of data commons.

Data Lineage and Provenance Registry: Capturing the derivation history of a data product, from its original sources onward, is essential to establish reliable data lineage and maintain robust data provenance practices. Relying on watermarking as a standalone measure is insufficient, as it primarily addresses ownership and copyright. To ensure comprehensive data traceability and integrity, an approach combining watermarking with detailed lineage tracking is necessary.
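One minimal way to picture such a registry entry, combining a content fingerprint, derivation links, and an optional watermark reference (all names and fields assumed for illustration):

```python
from dataclasses import dataclass, field
import hashlib

@dataclass
class LineageRecord:
    artifact_id: str                 # dataset, model, or derived data product
    parents: list[str] = field(default_factory=list)  # artifacts it derives from
    transformation: str = ""         # e.g. "fine-tuning", "filtering", "merging"
    content_hash: str = ""           # integrity anchor for the artifact's bytes
    watermark_id: str | None = None  # ownership/copyright marker, if present

def fingerprint(data: bytes) -> str:
    """Hash the artifact itself, so the record is tied to actual content
    rather than to a claim about it."""
    return hashlib.sha256(data).hexdigest()
```

The parents field captures what watermarking alone cannot: a traversable chain from any data product back to its sources.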

Interoperable Data Sharing Frameworks: For effective global oversight, crafting and adopting international standards and frameworks that harmonize data governance practices across regions is paramount. This becomes especially crucial considering the vast volume of private data used in the creation of proprietary technologies. By establishing these harmonized frameworks, we can promote enhanced interoperability and deepen public trust in worldwide data-sharing initiatives. This collaborative effort demands the combined actions of governments, organizations, and industry stakeholders to ensure that data exchanges are transparent, properly disclosed, and secure for all parties involved.

Global “red-teaming”: The establishment of independent, multi-disciplinary expert rosters, often referred to as “red teams,” refreshed every two years to address conflicts of interest, with participants required to declare any involvement in AI-related developments. These teams would be tasked with scrutinizing the positive and negative implications of AI across various sectors, developing scientific and engineering tools for assessing the safety and impact of AI systems through duration and value-chain analysis, and anticipating future developments.

Technology Alignment Council: Establishing a standing consultation body that meets quarterly, consisting of global technology firms to share real-time governance insights, can foster cohesive and synergistic advancement of interoperable and traceable AI technologies. This method emphasizes continuous dialogue and collaboration among tech giants and stakeholders. It helps to prevent undue concentration of power and champions transparency, interoperability, and traceability in the pursuit of responsible AI development.

Global Best Practices: Policy templates and model safeguard systems could be devised to recognize the universal-use characteristics of AI technology. Achieving agreement on balancing the encouragement of beneficial AI applications with the establishment of strong controls to counteract adversarial uses and reduce harmful outcomes is core to making any meaningful progress in AI (and other emerging tech) governance. Adopting this strategy may also offer a more even playing field, especially for nations and entities striving to stay abreast of rapid technological progress and shifting regulatory environments.

Annual Report: An annual report that compiles and synthesizes research on emerging AI trends, investments, and product launches, and evaluates relevant governance frameworks, can be game-changing. This report would also cover global drivers, potential threats, threat actors, and opportunities. By proposing potential scenarios, providing actionable recommendations for governments and international organizations, and including a detailed timeline to prevent delays in follow-up actions, it would substantially strengthen informed decision-making and elevate public discourse.

Global Incident Database: A global database, crafted for both anonymous and identified reporting of significant AI-related incidents and building upon existing local efforts, could lower the barriers to reporting and heighten the incentives to do so. This prospective tool should be managed by a middleware entity with demonstrated technical capabilities to assess claims. It could catalyze cross-border collaboration and ensure consistent threat analysis. This database would provide a secure platform to discuss and evaluate emerging threats, enhancing global preparedness and serving as a measure to bolster confidence.

Technology Literacy: One or more entities should collaborate with educational institutions, leveraging freely accessible Creative Commons-licensed standards, materials, and platforms to promote AI literacy, in line with UNESCO’s guidance initiative. Such initiatives emphasize the importance of investing in education, empowering individuals to effectively navigate the AI landscape and tap into its potential for both personal and societal betterment. Educational curricula should be adapted, with incentives and requirements introduced for companies to explain how their technology operates, how the systems are constructed, by whom, using which resources, and for what purpose. This ensures that people, especially children, organically perceive these systems as advanced yet flawed computational tools and engage with them accordingly.

Technology Passports: A “technology passport” system could streamline assessments across jurisdictions, allowing stakeholders to scrutinize a technology’s outcomes throughout its journey. Such a passport would be an evolving document in which AI systems accumulate “stamps,” signifying that expert review has verified the system’s adherence to predetermined criteria. Stamps would be given at key moments in the system’s evolutionary journey and value chain, such as when it integrates new data, connects to other systems, is decommissioned, or is applied to a new purpose.
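A minimal sketch of what such a passport could look like as a data structure, assuming hypothetical names (Stamp, TechnologyPassport) and an append-only convention:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Stamp:
    event: str       # e.g. "new training data", "interconnection",
                     # "repurposing", "decommissioning"
    criteria: str    # the predetermined criteria the review verified
    reviewer: str    # the accredited expert body issuing the stamp
    issued_on: date

@dataclass
class TechnologyPassport:
    system_id: str
    stamps: list[Stamp] = field(default_factory=list)

    def add_stamp(self, stamp: Stamp) -> None:
        """Append-only by convention: stamps accumulate over the system's
        life and are never rewritten, preserving the audit trail."""
        self.stamps.append(stamp)
```

A regulator in any jurisdiction could then read the same passport and see which reviews the system has passed, and at which points in its value chain.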

International Standards Repository: Establishing a comprehensive repository, potentially managed by a middleware entity within an academic environment, that centralizes references to existing AI standards, elucidates disparities, evaluates the necessity for updates in light of the advancing technology landscape, illustrates practical applications, and tackles ethical concerns and conflicts of interest. This repository would bolster transparency, guide decision-making, and promote collaboration in safeguarding against ethically misguided applications, all while preventing standards influenced by conflicts of interest from supplanting robust governance. It would allow interested parties to navigate available resources and determine the best approaches for use cases. Importantly, this repository would be made freely accessible, even though some of the standards referenced may still be proprietary to the issuing standards organizations.

Open-source Sandboxes: Creating frameworks for open-source sandboxes or testbeds to enable developers and implementing agents to ethically and transparently test, validate, verify, and provide technical oversight for AI capabilities, whether they are human-machine or machine-machine interactions. These frameworks for sandboxes will be designed with open-source and reproducible solution architectures in mind. This open-source approach not only holds significant potential for applications within the field of AI but also addresses the unprecedented convergence of technologies that AI enables. If successful, many of the proposed components within these open-source sandboxes could serve as models for governing yet-to-be-anticipated fields of scientific discovery and technological innovation.

Technological Tools: Ongoing development of software and hardware tools, including cryptographic methods and security protocols, forms the foundation for creating strong, secure, and reliable systems to protect against cyber threats, data mishandling, unscrupulous data mining, and related vulnerabilities. This development can also enhance process efficiency, potentially resulting in substantial time and cost savings. Furthermore, to cultivate a culture of security that extends beyond technical approaches, it is important to encourage the sharing of knowledge and expertise among a wide array of global stakeholders. This collaborative effort can foster continuous refinement of tools that are free, accessible, and adapt to the ever-changing technological landscape.

Collaborative Policy Forums: A dedicated assembly, akin to the States Parties conferences often associated with international treaties, could function as a forum for reaching agreements regarding the prohibition or restriction of AI technologies and their applications in situations that pose undesirable risks with potential consequences for international stability and security. Such gatherings could strengthen technology and AI governance even when a formal treaty is absent. This would prove particularly valuable in scenarios where AI models and their downstream applications may conflict with established normative frameworks, arms control instruments, and principles related to politics, society, and human rights.

Declaration Portal: Establishing a "declaration portal," drawing inspiration from the arms control and non-proliferation regime, requiring state and corporate actors to disclose their AI developments, approaches, and deployments, would encourage transparency and adherence to globally agreed norms and standards, promote international technical cooperation and knowledge exchange, and serve as a confidence-building measure.

Global Certification: Develop global certification programs aimed at integrating universally agreed-upon ethical principles, including the handling of tension points and trade-offs, into AI processes. Ideally, these programs should be conducted in collaboration with credible professional technical organizations, leveraging their demonstrated expertise in developing such programs. This would ensure that the certification process goes beyond theoretical concepts and provides practical solutions for addressing ethical considerations that are clearly defined in existing normative instruments covering political, social, environmental, and human rights.

***

A global technology and AI governance framework needs to be both flexible and agile. Proactively fostering dialogue and confidence-building measures can make the implementation of a global framework more reliable and facilitate prompt responses to relevant issues. Governing a swiftly evolving technology thus requires a comprehensive and federated approach, covering each technology’s journey from inception to obsolescence.

While corporate input is valuable for shaping any technology-related framework, maintaining an open-source, community-driven, and independent approach is essential for transparency. Any framework should clearly define accountability, specifying what is being developed, by whom, under which guidance, by which standards, and for what intended purpose. In doing so, it can prompt companies to both showcase their dedication to transparent, safe, and accountable AI deployment and foster broader stakeholder collaboration.

There are still numerous unresolved questions concerning navigating different regulatory landscapes, mitigating geopolitical tensions, and balancing various corporate interests. For instance, how will the proposed mechanisms be implemented and monitored? How can we safeguard the political autonomy, corporate independence, technical integrity, and reliability of the individuals, institutions, and middleware entities influencing the development and use of these technologies? When political and technical disagreements arise, who will resolve them? Is there a role, for example, for the International Court of Justice to address legal disputes according to international law and provide guidance on AI-related questions with transnational implications? Alternatively, is there a need to establish a separate judicial settlement body to address potential claims of harmful uses with global implications?

If we can find answers to these and other questions that will undoubtedly materialize as our use of these technologies evolves, a globally accepted framework for AI governance could serve as a steppingstone for governing future scientific and technological advancements, extending beyond AI.

About this proposal

This proposal builds on a collaborative effort between the Artificial Intelligence & Equality Initiative (AIEI) at Carnegie Council for Ethics in International Affairs (CCEIA) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA). It benefits from the expertise and experience of the many people working in the field of AI and governance.

Established in 2020, the AIEI is a vibrant, results-driven community of practice dedicated to scrutinizing the impacts of AI on societal equality. Committed to fostering ethical integration and empowerment in AI advancements, it champions the development and deployment of AI technologies that are just, inclusive, and firmly rooted in pragmatic and responsible principles. The initiative brings together a globally representative Advisory Board, with members from over 20 nations across every continent. These advisors are luminaries in their respective fields, hailing from academia, governmental bodies, multinational institutions, NGOs, and the business sector, blending technological insight with geopolitical expertise.

FOOTNOTE:

The Framework for the International Governance of AI proposed five symbiotic components:

(1.) A neutral technical organization charged with continuously assessing which legal frameworks, best practices, and standards are achieving the highest levels of acceptance globally.

(2.) A normative governance capability with limited enforcement powers to promote compliance with global standards for the ethical and responsible use of AI and related technologies.

(3.) A toolbox for organizations to assess and certify conformity with standards.

(4.) The ongoing development of AI-governance-supporting technological tools that can assist with data relevant to decision-making, validate and audit existing systems, and mitigate risks where necessary.

(5.) Creation of a Global AI Observatory (GAIO), bridging the gap in understanding between scientists and policymakers and fulfilling functions, described earlier in this document, that are not already being fulfilled by other institutions.


ADDITIONAL RESOURCES:

AI Red Team/Hack the Future: Redefining Red Teaming, July 2023

Association for Computing Machinery Statement on Generative AI, September 2023

Credo.ai: The Hacker Mindset: 4 Lessons for AI from DEF CON 31, August 2023

IEEE Statement on Generative AI, June 2023
