The Global AI Observatory (GAIO) was referenced by Dr. Adam Day, head of the Geneva office of the United Nations University Centre for Policy Research, in his 2024 book The Forever Crisis: Adaptive Global Governance for an Era of Accelerating Complexity.
The authors of the following GAIO proposal are Professor Sir Geoff Mulgan, UCL; Professor Thomas Malone, MIT; Divya Siddharth and Saffron Huang, the Collective Intelligence Project, Oxford University; Joshua Tan, Executive Director, the Metagovernance Project; and Lewis Hammond, the Cooperative AI Foundation.
Here we suggest a plausible and complementary step that the world could agree on now, as a necessary condition for more serious regulation of AI in the future (the proposal draws on work with colleagues at UCL, MIT, Oxford, the Collective Intelligence Project, Metagov, and the Cooperative AI Foundation, as well as on previous proposals).
A Global AI Observatory (GAIO) would provide the facts and analysis needed to support decision-making. It would synthesize the science and evidence required to support a diversity of governance responses, and address the great paradox of AI: a field founded on data in which remarkably little is known about what is actually happening and what might lie ahead. Currently, no institution exists to advise the world by assessing and analyzing both the risks and the opportunities, and much of the most important work is kept deliberately secret. GAIO would fill this gap.
The world already has a model in the Intergovernmental Panel on Climate Change (IPCC). Established in 1988 by the United Nations with member countries from around the world, the IPCC provides governments with scientific information they can use to develop climate policies.
A comparable body for AI would provide a reliable basis of data, models, and interpretation to guide policy and broader decision-making. GAIO would have to be quite different from the IPCC in some respects, working far faster and in more iterative ways. But ideally, like the IPCC, it would work closely with governments, providing them with the guidance they need to design laws and regulations.
At present, numerous bodies collect valuable AI-related metrics. Nation-states track developments within their borders; private enterprises gather relevant industry data; and organizations like the OECD's Artificial Intelligence Policy Observatory focus on national AI policies and trends. There have also been attempts to map options for the governance of more advanced AI, such as work from governance.ai. While these initiatives are a crucial beginning, there remains a gulf between how scientists think about these issues and how the public, governments, and politicians do. Moreover, much about AI remains opaque, often deliberately so. Yet governments cannot sensibly regulate what they do not understand.
GAIO could help fill this gap through six main areas of activity:
- The first is the creation of a global, standardized incident reporting database concentrating on critical interactions between AI systems and the real world. For example, in the domain of bio-risk, where AI could aid in creating dangerous pathogens, a structured framework for documenting incidents related to such risks could help mitigate threats. A centralized database would record essential details about specific incidents involving AI applications and their consequences in diverse settings, examining factors such as the system's purpose, use cases, and metadata about training and evaluation processes (a schema sketch follows this list). Standardized incident reports would enable cross-border coordination, decreasing the odds of miscommunication, and mitigating its potential effects, in a likely arms race over AI whose consequences could be as dire as those of the nuclear arms race.
- Secondly, GAIO would assemble a registry of crucial AI systems, focused on the applications with the largest social and economic impacts, as measured by the number of people affected, person-hours of interaction, and the stakes of their effects, in order to track their potential consequences. It would ideally also set rules for providing access to models, to allow for scrutiny. Singapore already has such a registry, and the UK government is considering something similar within the country; at some point, comparable approaches will need to become global.
- Thirdly, GAIO would bring together a shared body of data and analysis on the key facts of AI: spending, geography, key fields, uses, and applications. There are many sources for these, but they are still not brought together in easily accessible forms, and much about investment remains very opaque.
- Fourth, GAIO would bring together global knowledge about the impacts of AI on critical areas through working groups covering topics such as labor markets, education, media, and healthcare. These would orchestrate the gathering of data, interpretation, and forecasting, for example of the potential effects of LLMs on jobs and skills. GAIO would also track metrics for both the positive and negative impacts of AI, such as the economic value created by AI products, along with the impact of AI-enabled social media on mental health and political polarization.
- Fifth, GAIO could offer national governments options for regulation and policy, as well as potentially legislative assistance (drawing on lessons from bodies such as Co-Develop, which promotes digital public infrastructure, and the IAEA), providing model laws and rules that could be adapted to different contexts.
- Lastly, GAIO would orchestrate global debate through an annual report on the state of AI that analyzes key issues, emerging patterns, and the choices governments and international organizations need to consider. This would involve a rolling program of predictions and scenarios focused primarily on technologies likely to go live in the following two to three years, and it could build on existing efforts such as the AI Index produced by Stanford University.
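To make the first two activities more concrete, here is a minimal sketch, in Python, of how standardized incident records and registry entries might be structured. Everything in it is an illustrative assumption rather than a proposed standard: the class names, field names, and severity scale are hypothetical, chosen only to show what standardization at this level would mean in practice.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class Severity(Enum):
    """Illustrative severity scale for cross-border triage (hypothetical)."""
    NEGLIGIBLE = 1
    MODERATE = 2
    SEVERE = 3
    CRITICAL = 4  # e.g., incidents touching bio-risk


@dataclass
class IncidentReport:
    """One standardized record in a global incident database.

    The fields mirror the details named above: the system's purpose,
    its use case, and metadata about training and evaluation.
    """
    incident_id: str           # globally unique; "GAIO-00042" is a made-up format
    reported_on: date
    jurisdiction: str          # ISO country code where the incident occurred
    system_purpose: str        # what the AI system was built to do
    use_case: str              # the setting in which it was deployed
    training_metadata: dict    # e.g., data sources, compute, evaluation results
    consequences: str          # what happened in the real world
    severity: Severity
    related_incidents: list[str] = field(default_factory=list)


@dataclass
class RegistryEntry:
    """One entry in a registry of crucial AI systems.

    Impact is measured, as suggested above, by people affected,
    person-hours of interaction, and the stakes of the system's effects.
    """
    system_id: str
    operator: str                    # the organization running the system
    people_affected: int
    person_hours_per_year: float
    stakes: str                      # e.g., "medical triage", "credit decisions"
    model_access_terms: Optional[str] = None  # rules for scrutiny access
```

The design point is that once reports share a schema along these lines, incidents filed in different jurisdictions become directly comparable and can be aggregated across borders.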
GAIO would also need to innovate. As indicated, it would need to act far faster than the IPCC, attempting quick assessments of new developments. Crucially, it could use collective intelligence methods to bring together inputs from thousands of scientists and citizens, which would be essential for tracking emergent capabilities in a fast-moving and complex field. In addition, it could introduce whistleblowing mechanisms, similar to the US government's generous incentives for employees to report harmful or illegal actions.
To succeed, GAIO would need legitimacy comparable to the IPCC's. This could be achieved by drawing its members from governments, scientific bodies, and universities, among others, and by ensuring a sharp focus on facts and analysis rather than prescription, which would be left in the hands of governments. It would ideally have formal links to other bodies with a clear role in this space, such as the ITU, IEEE, UNESCO, and the International Science Council. And it should aim to collaborate as closely as possible with others already doing excellent work in this space, from the OECD to academic centers.
Contributors to the work of GAIO would be selected, as with the IPCC, on the basis of nominations by member organizations to ensure depth of expertise, disciplinary diversity, and global representativeness, along with maximum transparency to minimize both real and perceived conflicts of interest.
The AI community and businesses using AI tend to be suspicious of government involvement, often viewing it solely as a source of restrictions. But the age of self-governance is now over. What is proposed here is an organization that exists in part for governments, but with the primary work done by scientists, drawing on successful attempts to govern many other technologies, from human fertilization and cloning to biological and nuclear weapons.
In recent years the UN system has struggled to cope with the rising influence of digital technologies. It has created many committees and panels, often with grand titles, but generally with little effect. The greatest risk now is that there will be multiple unconnected efforts, none of which achieves sufficient traction. The media and politicians have been easily distracted by wild claims of existential risk, and few feel confident enough to challenge the major corporations, especially when threatened with the prospect of their citizens being cut off from the benefits of OpenAI or Google.
So, legitimating a new body will not be easy. GAIO will need to convince key players from the US, China, the UK, the EU, and India, among others, that it will fill a vital gap, and will need to persuade the major businesses that their attempts at controlling the agenda, without any pooling of global knowledge and assessment, are unlikely to survive for long. The fundamental case for its creation is that no country will benefit from out-of-control AI, just as no country benefits from out-of-control pathogens.
How nations respond is bound to vary. China, for example, recently proposed to ban LLMs with "any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity." The US, by contrast, is likely to want maximum freedom.
But shared knowledge and analysis are the necessary condition for nations to decide their own priorities. Unmanaged artificial intelligence threatens the infrastructures and information spaces we all need to think, act, and thrive. Pooling knowledge in intelligent ways is the vital first step toward harnessing the benefits of artificial intelligence and avoiding its dangers.
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the authors and do not necessarily reflect the position of Carnegie Council.