Two Core Issues in the Governance of AI, with Elizabeth Seger

Mar 22, 2024 56 min listen

Which is more dangerous: open-source AI, or large language models and other forms of generative AI controlled entirely by an oligopoly of corporations? Will open access to building generative AI models make AI more democratic? What other approaches to ensuring that generative AI is safe and democratic are available?

Carnegie-Uehiro Fellow Wendell Wallach and Elizabeth Seger, director of the CASM digital policy research hub at Demos, discuss these questions and more in this Artificial Intelligence & Equality podcast.

For more from Seger, read her recent article on AI democratization.


WENDELL WALLACH: Hello. I'm Wendell Wallach. My guest for this podcast is Elizabeth Seger, the director of the digital policy team at Demos, a cross-party political think tank in London. She is also a research affiliate at the AI: Futures and Responsibility project at the University of Cambridge. What I find fascinating about Elizabeth is that she has become a thought leader on two of the most critical issues in the governance of artificial intelligence (AI), issues that will help determine whether AI might enhance equality or exacerbate structural inequalities. The first of these issues is whether large AI models, particularly generative AI, are potentially too dangerous to be released as the AI equivalent of open-source software. The second is what it would actually mean to democratize AI.

Welcome, Elizabeth.

ELIZABETH SEGER: Thank you for having me.

WENDELL WALLACH: It is wonderful to have you. Please clarify for us why the question of whether large language models (LLMs) and other generative AI frameworks should be controlled by large corporations or open-sourced is such a controversial issue.

ELIZABETH SEGER: The open-source debate around large language models and highly capable AI has become such a big deal and so controversial because it is a high-stakes control question.

On the one hand, not open sourcing a model, leaving large corporations with sole control, puts these models in the hands of a very few large tech companies, and doing this could prevent other players from participating in what promises to be a financially lucrative and beneficial AI industry.

On the other hand, open sourcing these systems could put control of them in the hands of the wrong people, and there is a lot of concern about the extent of damage that could be done by putting highly capable AI systems in the hands of malicious actors.

So we have a massive control problem on both sides and high stakes if we get this question wrong.

WENDELL WALLACH: I have heard you say that you think this distinction between open source and corporate control is really a false dichotomy. Please clarify.

