Responsible AI & the Ethical Trade-offs of Large Models, with Sara Hooker

Jul 31, 2024 · 64 min listen

In this episode of the Artificial Intelligence & Equality podcast, Senior Fellow Anja Kaspersen speaks with Sara Hooker, head of Cohere for AI, to discuss her pioneering work on model design, model bias, and data representation. She highlights the importance of understanding the ethical trade-offs involved in building and using large models and addresses some of the complexities and responsibilities of modern AI development.


ANJA KASPERSEN: As artificial intelligence (AI) continues to reshape narratives and paradigms, I am pleased to welcome Sara Hooker. Sara is the founder, vice-president of research, and head of Cohere for AI, a research lab dedicated to solving complex machine learning (ML) problems and conducting fundamental research that explores the unknown. A link to Sara’s impressive bio, her significant body of research, and her podcast exploring the known and unknowns of machine learning can also be found in the transcript of the podcast.

Welcome, Sara. It is a pleasure to have this conversation with you.

SARA HOOKER: It’s lovely to be here.

ANJA KASPERSEN: Before we dive into the complexities—as you would say, the “known and unknowns of machine learning and AI”—and maybe dismantle a few misconceptions along the way, could you share what inspired you to enter this field? What pivotal moments or key influences brought you here and also to adopt this very specific outlook that I was just referring to?

SARA HOOKER: My career has always been driven by answering interesting questions. I am a computer scientist by training, but I would say that much of Cohere for AI is built on the idea of being at the frontier of ideas.

Typically, you have to have different ways of looking at existing components, and Cohere for AI is a good example of that. We are a hybrid lab. We have a large industry lab, which is typical of how progress is made these days, especially in building large language models (LLMs), where we have a lot of compute and full-time staff, but we also have a hybrid open-science component where we collaborate with experts across many different fields. I have always been equally interested in what leads to breakthroughs and in what combinations of people and spaces produce interesting ideas.

We released a multilingual model earlier this year that serves 101 languages, and it was almost entirely the product of some 3,000 researchers across the world working together. That is really interesting because it is what I would call a big science project, where you have to build a common consensus among researchers that this is an important question.
