The State of AI Safety in China, with Kwan Yee Ng & Brian Tse

May 9, 2024 59 min listen

AI safety and governance work in China is much more advanced than is generally appreciated. The Chinese government and AI community are well aware of the risks AI poses and are working to tackle them. International coordination is therefore quite possible.

In this Artificial Intelligence & Equality podcast, Carnegie-Uehiro Fellow Wendell Wallach discusses with Concordia AI's Kwan Yee Ng and Brian Tse how to build on the momentum from recent events such as the Bletchley Summit and the United Nations General Assembly AI resolution to establish global norms and standards for responsible AI development.

WENDELL WALLACH: Hello. I am Wendell Wallach. Today we are going to talk about the state of artificial intelligence (AI) safety in China, which is also the title of a report from Concordia AI, an independent Beijing-based organization focused on AI safety and governance.

We will talk about the report with two of Concordia’s leaders. First we have Brian Tse, who is the founder and CEO of Concordia AI. He is also a policy affiliate at the Centre for the Governance of AI and part of the consultative network for the UN’s High-Level Advisory Body on AI. Brian graduated from the University of Hong Kong. I have actually known him for many years, as he appears at AI ethics and safety forums both in China and around the world.

Kwan Yee Ng is the senior program manager at Concordia AI. She is also part of the writing team for the International Scientific Report on Advanced AI Safety, chaired by Yoshua Bengio. Kwan Yee has a background in international relations and holds degrees from the London School of Economics and Peking University.

Welcome to both of you. Kwan Yee, as you survey AI safety in China, what do you believe will be the most surprising to the largely Western listeners to this podcast?

KWAN YEE NG: Thank you, Wendell. First off, I just want to thank you and your team for having us on this podcast.

Let me just take a moment here to set the scene. In 2023 after the launch of ChatGPT I think discussion on frontier AI safety and governance really took off, but there were some influential voices that framed AI development as a zero-sum competition between China and some Western countries, arguing that China does not care or is unlikely to act to reduce AI risks. This raises the question: Just how much does China actually care about the risks from frontier AI development, and will it take any meaningful actions to address these concerns?

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
