In the governance of AI, a few small initiatives have had a large impact. One of these is the Center for AI and Digital Policy (CAIDP), led by Marc Rotenberg and Merve Hickok, our guests in this Artificial Intelligence & Equality podcast.
Among CAIDP’s activities is the yearly publication of the Artificial Intelligence and Democratic Values Index, in which more than 75 countries (as of 2022) are rated on an array of metrics, from endorsement of the OECD/G20 AI Principles to the creation of independent agencies to implement AI policies. Furthermore, the CAIDP staff and collaborators have been involved in and helped shape most of the major AI policy initiatives to date.
WENDELL WALLACH: Hello. I am Wendell Wallach. Many listeners will be aware that Carnegie Council for Ethics in International Affairs has had a longstanding interest and involvement in the ethics of emerging technologies and their governance, particularly artificial intelligence (AI). That is why I am so pleased to be hosting this podcast highlighting the work of the Center for AI and Digital Policy, also known as CAIDP, a small initiative that few of you may have heard of but that is nevertheless having an outsized impact. Today we have two leaders of CAIDP, Marc Rotenberg, its founder and executive director, and Merve Hickok, its president and research director. One piece of recent evidence that underscores CAIDP’s influence is the Federal Trade Commission’s (FTC) announcement in mid-July of an investigation of OpenAI.
Merve, for our listeners who are unfamiliar with the FTC, what is it, and what did they announce?
MERVE HICKOK: Thank you, Wendell, and thanks for having us on this podcast. We really appreciate it, and it is good to see you again.
The Federal Trade Commission is the federal consumer protection agency in the United States, regulating and overseeing fair business practices. In the absence of a dedicated privacy regulator in the United States, it seeks to regulate and oversee that space as well. It is not concerned only with AI or digital practices; at a high level it is the consumer protection agency.
WENDELL WALLACH: What did they announce?
MERVE HICKOK: We were very happy that they recently announced an investigation into OpenAI’s business practices as well as their data practices, policies, and procedures. We can go into further detail, but we have been working on this and demanding this investigation for a few months now. What they are seeking with that investigation is policies and procedures regarding OpenAI’s stated practices; model development; how they audit and moderate outputs; and what they have done with regard to bias, transparency, privacy, safety, and deception risks.
WENDELL WALLACH: Tell me a little bit more about what CAIDP did in March and early July to prompt the FTC to take this action.
MARC ROTENBERG: In March, when there was a lot of focus on the release of GPT-4, we became aware of the concerns Merve described, from public safety and consumer protection through to cybersecurity and misinformation. I had success in the past bringing complaints to the Federal Trade Commission involving Google and Facebook, and this seemed like a good opportunity to put the matter before the FTC.
Merve had just testified before a House committee about the need to establish AI guardrails, so we filed the complaint at the end of March and then carried forward a campaign, entitled #OpenTheInvestigation, urging the FTC to begin the investigation we had proposed. One of our staff members here in Washington attended an open meeting of the Commissioners, reminded them of our complaint, and asked them to take action, and just about a week before word leaked that the FTC had opened the investigation we filed a detailed supplement. Altogether we had over 80 pages of filings for the Federal Trade Commission outlining the concerns about ChatGPT and what we thought the FTC should do.
WENDELL WALLACH: It is difficult for us to know who else may have prompted the FTC to act, but at least for people like myself yours was the most visible effort, and I was highly appreciative not only that you had taken this initiative but that it appeared to be seminal in moving the matter forward.
Merve, can you tell us a little bit more about the Center for AI and Digital Policy, what it is, and what some of your activities are?
MERVE HICKOK: How much time do we have?
We are an independent nonprofit research and education organization. We are incorporated in Washington, DC; however, we operate on a global level. We can summarize our activities under research and education, advocacy, and advisory work.
On the research and education side we run semester-long AI policy clinics. In September we are about to start the sixth semester, so we have been doing this for a while. We have been developing future AI policy leaders across the world, and participants from more than 60 countries have graduated from our clinics.
We also produce our flagship research publication, the AI and Democratic Values Index, in which for three years now we have examined the national AI strategies and practices of 75 countries. It stands as one of the most comprehensive comparative analyses of AI policy.
We advise national, federal, international, and supranational bodies: the European Union, the United Nations, the United Nations Educational, Scientific and Cultural Organization (UNESCO), the Organisation for Economic Co-operation and Development (OECD), and the Global Partnership on Artificial Intelligence, as well as a number of federal and national agencies.
WENDELL WALLACH: Marc, when you founded CAIDP you had already been very involved in policymaking on the digital front in Washington, and I wondered: What led you to start this new body, and what did you hope it would achieve?
MARC ROTENBERG: Wendell, I have been in the field of AI and policy work, I am almost embarrassed to say, for more than 40 years. Before I became a lawyer—one of the few lawyers in our nation’s capital, I might add—I was a computer programmer and a chess player, and so I was very interested in the problem of how we write programs to play chess and backgammon, and I was involved in some of the early developments. I actually lectured on AI programming when Jimmy Carter was president. That is how far back that all goes.
To carry forward, the issue of AI as a digital policy issue has been ripe, I would say, for many, many years. It is only in the last year perhaps that the public has become engaged, because of the widespread availability of generative AI products such as ChatGPT. What seemed most important to me was to establish a system of democratic governance for this new technology, and with the establishment of CAIDP we are trying to promote democratic governance and the rule of law and to ensure the protection of fundamental rights. It has been very exciting, I would say, with the support of Merve and others, to see the rapid development of the field and the great interest the Center has received for this work.
WENDELL WALLACH: Merve, is CAIDP more focused on domestic U.S. policy or on international policy?
MERVE HICKOK: As a global nonprofit and a global research and educational organization we are of course involved on both fronts. We have been involved with OECD and UNESCO work for a long time, and in the last two to three years we have been heavily involved in the EU AI Act conversations on the advocacy and advisory side, as well as the Council of Europe’s AI Treaty work for almost three years now.
However, in the last few months U.S. AI policy has picked up significantly, so you are seeing us putting out advisories, recommendations, and statements more frequently on the U.S. side. As Marc mentioned, I had the opportunity to testify in the first congressional hearing on whether or not the United States is ready for AI technology and how it should proceed.
I think the United States is right now picking up speed on AI policy and regulation. Other regions have been more involved with this, and at CAIDP we have been more involved in international policy. However, there is definitely a need for the United States to catch up and hopefully lead these efforts, so you see us acting on both fronts.
WENDELL WALLACH: You mentioned earlier your major initiative, the AI and Democratic Values Index. I would appreciate it—and I am sure our listeners would also—if you could tell us a little bit more about what that is and what you have discovered so far.
MERVE HICKOK: Absolutely. The AI and Democratic Values Index is one of the founding activities of the Center. We started it, as Marc mentioned, in an effort to assess national AI policies and strategies: what countries are committing to and what they are actually doing on the ground.
We have established 12 objective metrics, and our effort has been to create a comparative and longitudinal analysis of where countries stand against these 12 metrics and how they change over time. In April of this year we published the third edition, looking at 75 countries: their priorities and their national AI strategies; their commitments to and implementation of the Universal Declaration of Human Rights, the OECD AI Principles, and UNESCO’s “Recommendation on the Ethics of Artificial Intelligence” (AI Recommendation); as well as the convergence on AI governance norms such as transparency, fairness, and accountability.
It is a pretty extensive work, and we have been blessed with the engagement of not only our global academic network but also our AI policy clinic participants. More than 300 people have either contributed to the research or, as you have done as one of the supporters and reviewers of this work, taken part in the peer review and feedback process, so it is not only us saying, “This is where we stand”; it is a truly global effort and organization. The AI and Democratic Values Index has so far been heavily used by policymakers and stands as the most extensive original analysis of its kind.
WENDELL WALLACH: Do you mind getting a little bit into the weeds with us here? I know you are looking at this more as a means to catalyze countries to take up the concerns you are evaluating them on, but I think it is also seen as a ratings system, and I wondered if you could give us a sense of the 75 countries you are evaluating. What percentage of them get high marks, and what percentage still have a lot to demonstrate?
MERVE HICKOK: Every country, as I mentioned, is measured against these 12 metrics and receives scores evidenced by its practices and by public documentation, and those scores allow us to rank each country. The countries are then split into five tiers, depending on where they stand with those scores. Interestingly enough, there is a nice bell-curve spread as well. Within those tiers, as you mentioned, we are trying to catalyze change and advancement through the metrics, hold countries accountable for their actual practices, and showcase areas of development as well as where they can improve.
The countries at the very top in Tier 1 are the ones that actually implement what they have committed to. That stands as one of the biggest differentiators between the countries.
WENDELL WALLACH: Tell us who some of those top-tier countries are.
MERVE HICKOK: Of course. The top-tier countries right now are Canada, Germany, and South Korea. They commit, for example, to transparency, and you see that they hold public consultations, that all of their AI policy is developed and managed very transparently with public commentary included, that they actually implement human rights-respecting practices, and that they implement accountability mechanisms for governments.
As we go down the tiers there is a gap between commitment and practice, the actual implementation of items, so you might see a country commit to human rights but then heavily surveil its population, use mass surveillance, or use things like predictive policing, which do not align with those human rights-respecting commitments. We try to make sure that we surface those practices and drive change.
WENDELL WALLACH: Given that the greatest body of our listeners is going to be from the United States and the European Union, what tiers do they fall into?
MERVE HICKOK: As an independent organization doing this work, we are very careful that we do not favor or disfavor any country and that we stick to our metrics. The United States sits in the middle. Over the years we have seen changes in U.S. policy: first more activity, but also changes in how transparently that activity happens. For example, in the first year of the report we were very concerned about behind-closed-doors activity, for example around the National Security Commission on AI, and a number of advocacy groups worked to make AI policy work in the United States more transparent.
That changed in the second year, for example with some of the Office of Science and Technology Policy (OSTP) public consultations, but this year we are again seeing behind-closed-doors activity in U.S. policy: closed-door meetings with CEOs and closed-door briefings with senators. We try to keep on top of these changes and activities to make sure that our ratings are evidence-based and reflect actual work.
WENDELL WALLACH: Marc, you have talked about working with the European Union on the AI Act, and I wondered, does the ratings system enter into that conversation? What is the emphasis of your consultation with the European Union?
MARC ROTENBERG: I think our ratings system, and more generally the report AI and Democratic Values, has been very influential in our discussions with members of the European Parliament, the Commission, the Council, and others. Our aim in creating the methodology was to establish objective metrics that would allow countries to talk about AI policy objectives as measured against what we call “democratic values with a small d”: such important principles as Merve described, including transparency, fairness, and opportunities for public participation.
It gives us the ability, for example, to criticize the United States—as Merve mentioned, when an AI commission is set up in secret we think that is contrary to democratic values—and also to acknowledge when the United States creates a more transparent process for public participation.
It is very much an issue right now, for example, with Senator Schumer, who has proposed a new method for gathering expertise for the U.S. Congress based on closed-door briefings. We can go back to our methodology and say, as we have said now for several years in our review of all countries’ policies and practices, that we simply disfavor closed-door briefings. We think democratic governments should have an open opportunity for participation. We think transparency is necessary not only for accountability of algorithms but also for governments. So I would say, yes, the methodology as well as the narrative reports give us a very effective way to talk about positive and negative developments.
WENDELL WALLACH: You said “democratic values with a small d,” and I wonder how you understand that when you are looking at countries that are not democracies but may in some respects honor what we might consider democratic values, and whether this emphasis on democratic values is problematic in terms of your ratings system being seen as universal.
MARC ROTENBERG: In fact I think—I don’t know whether it is an achievement or an insight—our methodology has worked remarkably well looking at both so-called “democratic” nations and so-called “non-democratic” nations because we can recognize when democratic nations fail to uphold the values we would associate with a democratic country, such as public participation in decision making, and we will also acknowledge in non-democratic countries when efforts are made to safeguard fundamental rights, to endorse the OECD AI Principles, or to put in place legal standards for algorithmic transparency. So the methodology has actually worked remarkably well across all governments, giving us the ability to, I would say fairly and objectively, assess national AI policies and practices.
WENDELL WALLACH: Please, Merve, jump in.
MERVE HICKOK: Just to expand on your question about the European Union and European countries: As I mentioned, we have been involved with the EU AI Act for multiple years at this point, working with the committees leading this effort and partnering with civil society organizations in the European Union, trying to drive the EU AI Act to be more rights-respecting rather than the product safety regulation it started out as. We have been pushing, advocating, and advising toward that end.
When you look at European countries, as Marc mentioned, we are able to keep an objective eye on the practices of these countries, so when you go deeper into the individual country reports in the Index for countries in Europe, you will see that the practices of the national governments make the difference in the ratings. We are seeing issues with how individual governments use AI: predictive or surveillance techniques, border control, detection systems aimed at minorities, and surveillance and population control. Just because a country is in Europe does not automatically place it high in the ratings. What matters is how these countries implement rights-respecting policies as well as transparency and public engagement.
WENDELL WALLACH: Great. That is very helpful.
I am going to pivot just a little bit and come back to some of your history. Marc, you alluded to how long you have been in this field of digital governance. Can you tell us a bit more about your history and some of the activities you have been involved in, going back at least to the Carter administration?
MARC ROTENBERG: I have been involved in a lot. At one stage I was helping nonprofits in Washington, DC set up microcomputers. That was fun. At another stage I was working for the United States Senate on law and technology. I did work in the privacy field and open government. I was also a founding board member of the organization that manages the .org domain and later served as its chair. I am very proud of that work because a big part of .org was about promoting the noncommercial use of the internet.
Also, with Computer Professionals for Social Responsibility (CPSR), which I have been thinking about a lot recently, we were very much involved in the early days of AI policy. I would actually credit CPSR—and this is about 40 years ago—with first calling attention to the risks of automated decision-making, particularly in the weapons context. Today’s debate about how systems can take control, producing what is now described as “existential risk” and what a number of years ago was simply called “automated warfare,” reflects a longstanding concern, and I am glad to see it come back into the policy world. I hope we will see concrete action.
I also hope we will see concrete action on the immediate problems of algorithmic bias because of course we are all so dependent on automated decision making today, and so many of these systems reflect social bias that needs to be addressed and corrected.
Yes, a lot of years working on these issues, but as we talk about it I think most of all about CPSR. They really were at the vanguard.
MERVE HICKOK: Wendell, can I just say that it is a great thing for me. I am honored and excited to have been working with Marc for a long time. On any given topic he either has a story or an article that he has written, or both, depending on the topic. It is fun and exciting to work with someone who has such extensive inside-and-out knowledge as well as such a network.
WENDELL WALLACH: I want to say the same thing, having worked with Marc on various projects and now in having you on this podcast today. I think people get a little caught up in who is in the public eye, but there are people like yourself who have truly been burning the midnight oil for years, trying to anticipate the challenges and get our society out front as opposed to waiting to react, which sometimes comes too late for us to engage.
MERVE HICKOK: I have literally lost track of the number of people and organizations who, upon learning that I work with Marc, tell me their own stories of how he helped them either get established or become more visible, to your point, working behind the scenes but very passionate about the cause.
WENDELL WALLACH: I hope sometime, maybe 20 or 30 years hence, people will go back and write a history of this period—and hopefully it is a positive history—of how we changed the trajectory in the deployment of AI. If it gets written, Marc, you certainly deserve more than a paragraph, but we will see how that unfolds.
MARC ROTENBERG: I want to say first of all thank you both for the kind words, but when that history is written I hope it is written by an actual person because a lot of that seems to be in debate right now. I am more trusting I think of people than I am of the machines.
WENDELL WALLACH: If it is written by a machine rather than a person, we can be sure that the people who are more prominent in the press are going to be the ones who did everything and those who worked long hours are perhaps overlooked, but that is part of the problem we have with machines these days.
Both of you were talking about your work with the OECD and UNESCO. I would like to hear a bit more because I do know how seminal you have been in initiatives by both of those organizations. Marc, you also often stress the OECD Principles rather than the UNESCO principles. They are pretty similar and overlap quite heavily, and I wondered why you often give, what shall I say, prior credit to the OECD.
MARC ROTENBERG: Part of it is chronological, but let me say a few words about the UNESCO Recommendation on AI Ethics. I am a big fan, and in fact we have adjusted the methodology in our AI and Democratic Values report to recognize, first, countries that have endorsed the UNESCO Recommendation and, second, countries that have implemented it. Those are two favorable indicators, and a lot of credit should go to UNESCO for developing global support for what is at the moment, I would say, the most comprehensive approach to AI ethics and regulation we have seen to date.
Also, I have worked with the OECD for more than 30 years. I am a big fan of the organization. I think they have done a very good job of setting out frameworks, not regulatory but let’s say principle-based frameworks, particularly for the digital economy in such areas as consumer protection, computer security, encryption, and of course the very famous OECD Privacy Guidelines of 1980, which literally became the foundation for many national privacy laws and international agreements, arguably one of the most influential policy frameworks from anyone at any time.
If I can say one more word on this: I did work as an expert on the drafting of the 2019 OECD AI Principles, which I think were very good but, we might also say, more limited than what was required. They did get 50 countries on board, including the G20 countries, and that is a remarkable achievement, but they were a bit reluctant to tackle some of the hard AI problems that were emerging, particularly the need to establish prohibitions on certain AI technologies.
Around the same time we were doing the work on the OECD AI Principles, I was also working with people such as Dr. Lorraine Kisselburgh to help draft what we called the “Universal Guidelines for AI.” The Universal Guidelines were intended to cover a lot of the ground that the OECD AI Principles simply did not reach. Looking back now, almost five years on, I am very proud of the Universal Guidelines. I actually think that framework may be one of the most important frameworks going forward, and we plan in fact to celebrate its fifth anniversary this year.
You have then the UNESCO Recommendation on AI Ethics adopted in 2021, comprehensive, with 193 countries behind it, very important; the OECD AI Principles of 2019, the first global framework for the governance of AI, endorsed by 50 countries; and the Universal Guidelines for AI, which I think will continue to provide guidance to policymakers going forward.
WENDELL WALLACH: A number of people have suggested that most of the generative AI applications released since November do not really pass muster under the OECD Principles, the UNESCO principles, or the Universal Guidelines. I wonder how you feel about that.
MARC ROTENBERG: It is an interesting comment. I am not sure if it is accurate, and I have been involved with all three. Let me say that generative AI was not anticipated in the OECD AI Principles nor really in the UNESCO Recommendation.
I will say about the Universal Guidelines that I am not sure if we were aware of generative AI, but we do have in place several principles that are clearly relevant. There is, for example, in the Universal Guidelines a termination obligation, so many of the people today who are concerned about generative AI and existential risk could actually look back at the Universal Guidelines and say precisely that if it is a matter of loss of human control, then whoever has deployed this system has an affirmative obligation to take it down. We thought that was very straightforward.
There are also principles in the Universal Guidelines concerning data accuracy, data provenance, and fairness that get to issues related to bias, copyright, and even cybersecurity I think. So there is a lot in the Universal Guidelines.
I know Merve has been working closely with the European Parliament on the EU AI Act and may also be able to say a few words about how the European Parliament addressed generative AI, which emerged actually somewhat late in the process of drafting the Act.
MERVE HICKOK: Let me start with the overall concerns about generative AI and then drill down to the European Parliament and the EU AI Act. Some of these concerns we included in our extensive comments in the FTC complaint. However, the FTC complaint relates only to the FTC’s enforcement powers. It does not, for example, cover things like copyright issues, because those do not fall under that remit.
When we look at generative AI, not so much the systems as the practices of the organizations developing and deploying them, and how those organizations actually respect these norms and governance structures, first and foremost among the concerns is the opaqueness of business practices, which includes the curation of data sets, the development of models, and the safety precautions taken throughout model development and maintenance.
As it stands we do not have much of an idea about the provenance of the data sets, their size, or what has been done to process them, for GPT-4, for example, though that is not the only problematic product. This opaqueness is preventing researchers as well as regulators from getting a better understanding of what is happening.
Representation within the data sets, and the bias it causes, is a known problem with AI and is being magnified in generative AI systems, and we have already seen biased results, whether in language models or image generators.
There is concern about privacy and data protection, again stemming from the training data sets and the development piece. We do not know what kind of private, confidential, proprietary, or copyrighted information has gone into training these systems, and a lot of privacy and data protection processes and mechanisms are missing from the current products. Unless you opt out, the default is that your prompts are used to train the system. We have seen cases where personal information and personal prompts have popped up in other users’ results, so privacy and data protection are still not settled.
Obviously there is the issue of accuracy—I am not comfortable with the word “hallucination”; I prefer “inaccuracy”—which results from both bias and the actual function of the system: When you try to predict the next word in a thread you are using word embeddings and probabilities and fine-tuning parameters, and that always produces some inaccuracy, so the outputs of these generative AI systems should not be taken as ground truth or objective truth.
We have a lot of cybersecurity and public safety concerns, not only about generative AI systems being used to develop or inject malicious code but also about how these systems can provide detailed advice on things that can put public safety in danger. In fact, if you go to OpenAI’s webpage and look at “disallowed usage,” there is a very, very long list of possibilities. Acknowledging possible harms and risks is one thing, and we appreciate that, but how do you go about governing those risks or putting guardrails in place? That is a key piece.
Consumer protection is obviously another piece. You will have a number of organizations that use these products, pay for them, and build models on top of them, but they have no control over the underlying model and no access to its governance, so they are building on opaque data sets and models and taking on all the downstream risks.
Finally, one of the biggest challenges for society at large is disinformation and what these systems can do to democracy and democratic values, as well as to trust in institutions and in democracy at large.
WENDELL WALLACH: I appreciate your going through that list because many in our audience will have seen one issue or another come up in an abbreviated newspaper article but will not have an appreciation of the breadth of concerns coming into play. Of course that is not an exhaustive list, but it underscores some of the most serious issues.
I gathered from what you said that those concerns do have the attention of the European Union, but what about the U.S. government? Do you have a sense that the U.S. government is moving in any way to effectively address these concerns, or is it still caught up in political issues and perhaps corporate capture in a way where many of these issues may not get addressed at all?
MERVE HICKOK: I think it has definitely captured the attention of U.S. policymakers, from the White House to Congress. That is for sure, and you can see it in the flurry of AI policy activity and hearings on both the Senate and House sides.
However, there is a bit of hype, and we would still like to see more diverse expertise heard in these hearings and a more diverse group of people contributing to these discussions and to policymaking, as we mentioned at the beginning of the conversation. You see more CEOs either testifying or going into closed-door meetings. We would like to see some of the experts who have been building governance models contribute to the conversation, and we would like to see people who are impacted by these systems contribute to the conversations and be invited to congressional hearings or expert groups.
It has definitely caught the attention of policymakers. The President’s Council of Advisors on Science and Technology has created a working group on generative AI, and we see a number of generative AI-specific hearings and task forces being set up.
The concern, on top of everything I have said about generative AI systems, is, as you mentioned, that we do not want this policymaking to be captured by corporations. It has to reflect the concerns about existing AI systems as well as what generative AI has additionally brought to the table.
WENDELL WALLACH: Marc, let me put to you the question I get quite often, which is: From what you have seen take place so far, are you generally optimistic or pessimistic that the world governments are going to move forward and effectively regulate artificial intelligence?
MARC ROTENBERG: Wendell, I have often been asked that question, and I know you have philosophy in your background, so I answer with Pascal’s wager, which is basically that it is better to be an optimist because the alternative is too grim to consider.
I have seen over time that progress is made. It can be slow, and sometimes there are detours and setbacks, but consider the field of AI policy in just the last few years: Working at the OECD even five years ago, it would have been impossible to imagine that UNESCO could put together the comprehensive Recommendation that it did, which was endorsed by essentially every country in the world. That was a remarkable achievement. The big question now is, what steps will be taken to implement it?
On the U.S. side, for a couple of years we felt we were wandering in the desert. There was, to be sure, good work underway in the White House through executive orders—across multiple administrations, by the way—but not much in the way of public participation or engagement, or any real prospects for legislation.
There has been a dramatic change over the last six months under the leadership of this administration that I see as very favorable, but of course oftentimes in the policy world one of the biggest challenges you face is attaining the goals you have set out. Even with a lot of support behind you, you need to maintain a clear focus on the outcomes you are seeking, and much of our work these days at the Center has been trying to ensure that we maintain our focus even as the public becomes more aware of AI and as policymakers become more willing to consider legislation.
WENDELL WALLACH: The good news is that it seems we do have an inflection point now. This generative AI moment has captured the attention of leaders and created a real opportunity, and we will see how much effective action we can precipitate from it.
Merve, when we look at your AI and Democratic Values Index, it seems to be focused very specifically on the policies of individual governments, and I am wondering to what extent the international governance of AI is on the radar for CAIDP.
MERVE HICKOK: It is very much on the radar, in the AI and Democratic Values Index as well as in our AI policy clinics, where we educate participants on the major AI policy frameworks. We would like to make sure that current and future AI policymakers understand what has happened in the past and what the existing commitments of countries are, such as the OECD AI Principles, the UNESCO Recommendation, and the G7 and G20 commitments, and do not keep reinventing the wheel with every hype cycle.
For example, the United States has been one of the leading countries behind the OECD AI Principles, but we would like to see more implementation. The United States recently, to our great excitement, rejoined UNESCO, so it will be crucial to see whether the United States actually implements the UNESCO AI Recommendation in its future policymaking.
We would like to see the major frameworks be more aligned. Similarly, through CAIDP I have been an official observer to the Council of Europe’s AI Treaty work and very much involved in those conversations. The United States is an observer state at the Council of Europe and is involved in the Treaty conversations as well, so we would like to see prior commitments reflected in the Treaty.
In the AI and Democratic Values Index as well, we continue to reflect updates in these AI policy frameworks, so it is not only the individual countries but these main frameworks at large. But as I said, at the end of the day this work needs to be aligned. We cannot have disparate and sometimes conflicting frameworks that apply to the same technologies and companies.
WENDELL WALLACH: Many of our listeners will be aware that Carnegie Council, together with the Institute of Electrical and Electronics Engineers (IEEE) Standards Association, put out a framework for the international governance of AI, a very abbreviated framework, about two weeks ago, which has been circulated broadly within the UN system and beyond.
Marc, you had a chance to review at least an early draft of that. I know you were particularly interested in one of the proposed components, the creation of a robust AI observatory. I wonder if you could tell us a little bit more about what you think is needed there, what steps we might take forward, and why, given that there is already an AI observatory within the OECD, you nevertheless want to stress the importance of something a little bit more robust on the world stage.
MARC ROTENBERG: I thought it was a good proposal from Carnegie Council to create this global observatory based at the United Nations. I think it is clear at this point that the United Nations under the leadership of the secretary-general does intend to establish a global commission.
The key question always in the development of these institutions and these frameworks is, how do we maintain complementary roles for the different organizations so that there is not duplication or conflict? We know that the European Union will establish comprehensive legislation through the EU AI Act, which is likely to replicate the Brussels effect of the General Data Protection Regulation. We know that we have the principle-based approach at the OECD and UNESCO and we look forward to the AI Treaty at the Council of Europe, but I do believe a missing piece that could be provided by the United Nations is the global observatory that you, Wendell, and others have proposed. Listening to the words of the secretary-general last week at the Security Council meeting, he talked about peace and security, which are fundamental to the United Nations, but he also talked about fairness, accuracy, and accountability in the context of AI. I am hopeful that this proposal moves forward. I think it would be an important addition to the governance of AI, and I am looking forward to news.
WENDELL WALLACH: To be fair, we are not the only people talking about that. Indeed those recommendations came out of two workshops we convened, at UNESCO and at the International Telecommunication Union, and among the participants was one member of CAIDP, so you were represented in those conversations. Also in those conversations was Sir Geoff Mulgan, who convened a separate group that thought deeply through what a Global AI Observatory (GAIO) might look like. For those of you who are interested, if you go to carnegiecouncil.org there is both the framework and a piece by Geoff and his committee that talks a little more about the observatory idea.
To bring in just one more element on that: I think we would all love to see something within the UN system to facilitate communication, cooperation, and a degree of coordination, but it is already clear that the governance of AI is going to be distributed across many institutions, so this is more a communication, cooperation, and coordination function. Perhaps, for example, your Index might be considered one of the elements, one of the tasks already being performed, that does not have to be reinvented within another institution but can continue to go on. We think there are many other tasks that the IEEE, UNESCO, or the EU may take up.
Again, nobody is trying to put forward models of what has to take place. I think we are all trying to underscore what needs to be attended to and trying to look for the most constructive ways to attend to those concerns.
Merve, is there something else that you would like us to be talking about before we end this podcast, something you would like to underscore that perhaps you think did not get its due in our earlier discussion?
MERVE HICKOK: First of all, I appreciate the whole discussion, but one thing I would like to leave listeners with is the truly global nature of these technologies and the need for meaningful participation by the global community, both in policymaking and in the governance of these technologies. For the AI and Democratic Values Index we could have confined the work to the United States and Western countries. We made an intentional effort to expand it to 75 countries to ensure that AI practices and policy developments in Latin America, Africa, and Asia were accurately reflected.
Similarly, we make a very intentional effort to develop future AI policy leaders and researchers from those regions. As I mentioned, our AI policy clinic participants come from more than 60 countries, in many cases countries where AI policy education does not exist or is not accessible or affordable for these participants.
We talk about the global, borderless nature of these technologies, but we have to build capacity and focus on the meaningful participation and engagement of these countries as well. It cannot be only the United States and Europe driving these conversations.
WENDELL WALLACH: Marc, what would you like us to be aware of that has not come up yet?
MARC ROTENBERG: I actually wanted to come back to a question you asked earlier, Wendell. You were talking about whether to be an optimist about these topics. It occurred to me that Merve testified before a House committee just a few months ago, at the beginning of March, and the committee asked the question: Are we ready for the AI revolution that is taking place?
I thought her answer was excellent. She said we were not: We did not have the guardrails in place, the public education, or the government expertise that we needed. But what we have seen in the United States over the last few months, in response to Merve’s testimony and the statements of others, has been remarkable progress.
I think it is important, as we talk about AI policy and the challenges we are facing, for people to maintain confidence in our ability to develop the safeguards we need. It is a little too easy, I am afraid, to just assume that the AI systems are going to solve these problems for us. Throughout the work I have done on this issue over many years, people have talked a lot about the need for AI to be human-centric, and if AI is going to be human-centric, that means we will always need to be in charge and willing to take on whatever challenge that entails. Let’s maintain some confidence, and let’s ensure that it is the humans who make the decisions.
WENDELL WALLACH: Is there a question you would like to be asked or something else you would like us to go into that we have not covered?
MARC ROTENBERG: I oftentimes include a little advertisement at the end of a conversation. We certainly encourage people to visit us at our website CAIDP.org and sign up for our newsletter, the CAIDP Update. We are providing a lot of very good information about what is happening in the world of AI policy, and we look forward to the participation of many people in this work through our clinics and other activities. We think it is a great issue. It is cross-cutting, it affects all of us in different ways, and we want people to be involved.
WENDELL WALLACH: That is a great note to end on. Thank you ever so much, Merve and Marc, for sharing your time, insights, and expertise with us. This has indeed been another rich and thought-provoking discussion.
Thank you to our listeners for tuning in, and a special thanks to the team at Carnegie Council for hosting and producing this podcast. For the latest content in ethics and international affairs, be sure to follow us on social media at @carnegiecouncil. My name is Wendell Wallach, and I hope we earned the privilege of your time. Thank you.
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.