Tech, AI, & Global Norms

Mar 23, 2022

How do tech, AI, and global norms intersect to generate political, legal, and ethical dilemmas? In this event, Carnegie New Leader Josephine Jackson leads a discussion with four experts on the future of warfare, and how changing norms shape strategic challenges and tactical decision-making for national security leaders.

JOSEPHINE JACKSON: Good morning or afternoon. Thank you for joining us today. My name is Josephine Jackson, and I am the moderator for this panel.

I think it is important to know what kinds of associated issues leaders from academia, law, the military, and science and technology are confronting. I am pleased to connect these different perspectives in a panel done under the auspices of the Carnegie New Leaders program, particularly as the topic and the format link to Carnegie Council's mission of identifying critical issues of concern and convening leading experts to generate solutions.

I have been asked to not give biographical background information on the panelists so that we may move to the main discussion more quickly, so I will introduce the panelists with names and titles only: Professor Anthony Lang Jr., chair in International Political Theory in the School of International Relations at the University of St. Andrews; Professor Mary Ellen O'Connell, professor of law and international dispute resolution at the University of Notre Dame; retired U.S. Air Force General Philip Breedlove, formerly the North Atlantic Treaty Organization's (NATO) Supreme Allied Commander, Europe, and U.S. European Command Commander; and Dr. Arun Seraphin, deputy director of the Emerging Technologies Institute, National Defense Industrial Association.

The panel is structured as a roundtable discussion, which will last approximately one hour. The panelists will then answer questions from the audience. Let's get started with the discussion.

The first main question for our panelists is: From your standpoint—i.e., international ethics, international law, military, or science and technology—what do you think are the biggest challenges to international efforts to regulate or ban the use of lethal autonomous weapons, also known as "killer robots," or other automated systems?

Professor Lang, would you like to start us off, please?

ANTHONY LANG: Sure. Thanks, Josephine, for organizing this, and thanks to the Council for hosting it and continuing all the good work they do on these and all sorts of related issues.

On this first question, which is a really important one—and of course there are many challenges—I would say, first of all, I don't think it's a good idea to try to ban these weapons necessarily because I'm not so sure that once the cat is out of the bag, so to speak, it's going to be possible to ban them. So regulation seems to me a better route for us to take.

In answer to the question, of course the biggest challenge is national interests because you have every state that has this kind of capability wanting to use this capability. I think that is a challenge, but it's not an insurmountable challenge; I certainly don't think that.

I would say that the moral principle that I would start with is that global cooperation on these matters is better than national militaries or national polities trying to accomplish this on their own. Of course we do need the more powerful states acting in such a way that can lead and enable others to come along, but the hope for me would be that these efforts to regulate such technologies would take place through global fora in some way.

Just in thinking about this today—and there are lots of different things we're talking about here, automated weapons, cyber technologies, and even uses of large aggregate data to develop automated responses to things, so we would have to spell out exactly what we're talking about—but just as one small example, in terms of automated weapons that are not controlled by human beings, there are global fora where this could be addressed.

Doing a tiny bit of research in advance of this question, I came across the United Nations Convention on Certain Conventional Weapons (CCW), an existing treaty developed to place limits on certain kinds of weaponry that were of concern to the international community, and I came across an article suggesting that this might be a good forum through which we could start to have conversations. Indeed, conversations have already started in this forum, with groups of experts going back to 2016 and 2017.

Again, my starting principle would be there are going to be challenges, but I think if we formulate this as a global effort and try to find global spaces within which to address it, that would be the preference.

One other small suggestion I would give—and Mary Ellen and I talked about this briefly ahead of the conversation today—is that there is an ongoing effort by international lawyers to think through carefully and to regulate cyberconflict, cyberweaponry, etc., and maybe Mary Ellen can speak more directly to that if she wants. That would be another forum where you have a group of international lawyers trying to think carefully at the global level about how we might respond to these new developments and how we could in some way regulate them.

I will leave my comments there and turn it over to others for an answer to the question.

JOSEPHINE JACKSON: Thank you very much for those insightful comments, Professor Lang.

Dr. Seraphin, would you like to answer this question, please?

ARUN SERAPHIN: Sure. Thank you, Josephine, and thanks for allowing me to participate in this event.

As I think about the challenges from a technology perspective, the one that strikes me as being the most difficult to deal with is just the speed at which technology is going to move, especially these kinds of digital technologies we're talking about here. They move so much more quickly from lab to operational use than the technologies that were developed in previous decades, certainly anything that is much more kinetic or physical or "real world" in nature.

Another challenge I think is that we have seen such a democratization of the sources of what we might call potentially "dangerous" technologies. There was a day when almost every technology that would be used in a defense/national security environment came from behind the fences of some government laboratory, so you had your eye on the source, you had an eye on the developer, and you had an eye on the user.

That has all changed. Technology development, especially in these digital circles, is much more commercialized, it is completely dual-use in nature, it is much more globalized, and it is much more local, so the regulatory environment needs to be restructured to deal with all of those sources.

The democratization of the technology is also a function of the way intellectual property flows these days and the way tech data flows these days. It is routinely shared through business arrangements, through international engagements, and through circles like NATO, for example. It is routinely sold for business purposes or through foreign military sales or those kinds of things. It is also routinely stolen these days. All of those things I think make it difficult to develop this kind of regulatory framework.

The final thing I would point out, which at least I have observed over the years as a technologist trying to work in Washington, DC, is there is a lack of policy understanding in the technical community and there is a lack of technical understanding in the policy community, especially in technologies that move very quickly. So we end up in a situation where for most emerging technologies—like I have seen in nanotechnology, environmental technologies, biotechnologies, cloning, and robotics—the regulatory community is always the police officer arriving many hours after the crime has been committed. We need to try to figure out a better way of connecting these two communities such that the policymakers can see what's coming so they can think about the frameworks and the technologists can understand the policy frameworks they will have to live within.

For example, when we really started going after better use of data for intelligence analysis—I first got involved in that in the early 2000s—one of the things that we thought about was: Could we have the coders code in some of our privacy regulations and sharing rules? It turns out they can basically do anything they're told to do, given a few dollars and enough hours. It's often the case that no one ever bothers to tell them that these are the kinds of constraints we want on the technology or its development.
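To make that idea concrete, here is a minimal, hypothetical sketch (in Python) of what "coding in" sharing rules might look like: the constraints are written down as data that policy staff can review and change, and the analysis code checks them before releasing anything. The record fields, rule names, and values are invented for illustration and are not drawn from any real regulation or system.

```python
# Minimal, hypothetical sketch of "coding in" data-sharing rules as explicit constraints.
# The field names, rule values, and record format are invented for illustration only.

from dataclasses import dataclass

CLASSIFICATION_ORDER = ["unclassified", "confidential", "secret"]

@dataclass
class Record:
    origin: str          # e.g., "domestic" or "foreign"
    classification: str  # one of CLASSIFICATION_ORDER
    contains_pii: bool   # personally identifiable information

# Sharing constraints expressed as data, so they can be reviewed and changed
# by policy staff without rewriting the analysis code that consumes the records.
SHARING_RULES = {
    "max_classification": "unclassified",
    "allow_pii": False,
    "allowed_origins": {"foreign"},
}

def may_share(record: Record, rules=SHARING_RULES) -> bool:
    """Return True only if every coded constraint permits sharing this record."""
    if record.contains_pii and not rules["allow_pii"]:
        return False
    if (CLASSIFICATION_ORDER.index(record.classification)
            > CLASSIFICATION_ORDER.index(rules["max_classification"])):
        return False
    return record.origin in rules["allowed_origins"]

# Example: an unclassified, foreign-origin record with no PII may be shared.
print(may_share(Record("foreign", "unclassified", False)))  # True
print(may_share(Record("domestic", "secret", True)))        # False
```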

Those are some things I think that we all need to think about as we try to develop the modern regulatory environment.

JOSEPHINE JACKSON: Thank you very much, Dr. Seraphin. That point on the lack of understanding is familiar to me and it underpins part of my motivation in wanting to bring together all these perspectives on the panel. Thank you.

Professor O'Connell, would you like to take a shot at addressing this question on what you think the biggest challenges are to international efforts to regulate or ban the use of lethal autonomous weapons?

MARY ELLEN O'CONNELL: Tony said the cat is out of the bag and we can't get a ban, but there is a significant proportion of the world that believes we can and must have a ban on artificial intelligence (AI) weapons, including a big segment of the tech community, and most of these positions have come through an international negotiation effort in Geneva.

In 2013 the UN Special Rapporteur on Extrajudicial Killing, who had been working on and reporting on the use of unmanned or remotely piloted weapons, came across the developing technology for completely or fully autonomous weapons, found it quite alarming, and called on the United Nations to take it up through its treaty, the Convention on Certain Conventional Weapons (CCW), which reviews new weapons for whether they fit international humanitarian law, in particular Article 36 of Additional Protocol I to the Geneva Conventions.

The process began formally in 2013 and 2014, and it has been stalled, interestingly, by the joint position of the United States and Russia: they will accept regulation, but they do not want a ban. There are at least two dozen countries, probably more now, that support a complete ban.

I am a person who supports a ban, and I will mention why, and then I will discuss the real question, Josephine, which is: What is blocking the possibility of a ban?

These weapons are going to be inherently illegal to use. They have certain features that, in my understanding of international law, put them in that prohibited category of "other weapons." This is partly why I think a ban is really possible: we have invented other kinds of weapons and have banned them, despite the fact that they were in use.

The first, most famous, probably analogous ban was on chemical weapons. They were used during the First World War, they had catastrophic effects, they were not very useful in terms of winning a battlefield contest, and they have been banned. We have had a few uses recently—we are going to talk about Ukraine in a minute, and we are very worried about them being used again—but for the most part the ban on chemical weapons has been quite effective.

AI weapons will be a form of mechanized killing in which human lives are destroyed without a human conscience being part of the decision to take life. In the view of the countries that support the ban, led normatively by the Vatican, which is a party to the Convention on Certain Conventional Weapons and has been a real leader in Geneva, the very idea that a human being would be killed through the decision of a computer processor without any human conscience intervening is a violation—in the view of the Vatican and in my view—of a person's basic human dignity. We currently execute or slaughter livestock through robotic killing means—I'm not sure that is ethical—but the idea that you would mechanistically kill people through the selection processes of a computer is really to deny that they have that basic human dignity. We can talk about that more in a moment.

The other reason why AI weapons will be inherently illegal is that we do not have good enough oversight of how the learning program of a computer will eventually make decisions to select and kill human beings or destroy property. We do not have the kind of oversight to know exactly how the machine is going to learn or what it is going to think it needs to do in the future in terms of killing—and remember, we are developing means by which these weapons can be programmed, continue to learn, self-charge, and travel long distances for years at a time. It is a scenario we really should not be contemplating at all.

Here's the problem, though: Why are we having problems getting there? It's the logic of militarism that we have received from the realists that has added to this idea that tech can't be stopped.

JOSEPHINE JACKSON: Thank you very much, Professor O'Connell.

General Breedlove, we are very interested to hear your views on what you think the biggest challenges are to international efforts to regulate or ban the use of lethal autonomous weapons known as "killer robots" or other automated systems.

PHILIP BREEDLOVE: Thank you, first of all, for having me and allowing me to share some ideas. Having listened to three wonderful presentations so far, there are things I would like to touch on first and then I will give you a few of my thoughts. They will all be aligned.

I will agree with Tony that we are probably past banning them. They are going to exist, much like chemical weapons, whether they are banned or not. Certain nations and entities are going to develop and employ these weapons, and, even when they sign onto the agreements, they have continued to use them, Syria being the latest example, and hopefully we will not see Russia do the same here in the next couple of weeks.

My original thought is to touch on the fact that this is really still very much "human in the loop." AI is a broad category. True artificial intelligence hasn't hit the battlefield yet, and it will be a long time before it does. What we are seeing now is a subcategory of true artificial intelligence, which most people call "machine learning." Those weapons are trained by humans; their patterns of behavior and decision patterns are taught by humans, and it is those human-taught rules that they take to the battlefield to make decisions.

I do believe, though, to Professor O'Connell's points, that AI is out there in the future. It is a ways out there, but now would be the time to start addressing it, because if we wait until it's on the battlefield it will be too late to really make any dent in it.

The other piece about this whole notion of killer robots that just go out and kill is that I think we need to realize that nothing is really autonomous. Even when we get to an AI system, a human will have to make the decision to launch it or place it in a position to be used or to have effect, so a human being will be making that decision.

Having commanded forces in multiple conflicts and watching the way our nation makes decisions about how we accept collateral damage and the impact of these weapons, it is a human decision from top to bottom. Literally, in the most egregious of conflicts the president sets what we call the "collateral damage allowable," and then we have very technical programs that help us to guide and understand the collateral damage expected and see if it meets the guidance that is passed down from those leaders above.

What I wanted to bring to the conversation today is that while we have this notion or feeling that we have these machines that are out there making entirely their own decisions we are not there yet and will not be there for some time, and even when we do get there, there will still be human decisions in the loop of where and how they will be used. As a military person, that is what I was charged with managing on a battlefield.

I will hold the rest for questions and answers. I am going to stop there. There is a whole lot more to talk about.

JOSEPHINE JACKSON: Thank you, General Breedlove. We will be giving our panelists an opportunity to ask questions of each other when we reach the Q&A portion.

We are going to now proceed to the second main question to our panelists, and that is as follows: Can global norms that were designed for the analog world—such as sovereignty, nonintervention, discrimination, right intention, and proportionality—still hold up in an increasingly digitized world?

Professor Lang, would you like to start us off again, please?

ANTHONY LANG: Sure. I think it's a really important question because norms emerge in particular times and places and they relate to particular practices.

However, there is of course something that I have worked on a bit and I know Mary Ellen has thought about as well, which is the "just war" tradition, which emerged hundreds if not thousands of years ago. Its principles are still ones we return to, and they also exist across different cultures; although they may be called different things, there are certainly parallel conceptions within the Islamic and Chinese traditions.

Having said that as a starting point, I think the relevance of the norms that have developed in an analog age can be adapted into something that we can try to think through for this world of more automation. I will point to a couple of them, just to throw some ideas out there, because I do think they are still relevant and I think they're important for us to hold on to.

The first one is the notion of intentionality. Within the just war tradition there is an idea of right intention, which is about launching a war, not necessarily about what you do on the battlefield, which has other norms governing it. I think it's a relevant norm for us to think about in a broader moral sense as well.

One of the issues would be that, as General Breedlove just pointed out to us, we are not quite at the fully automated weapon yet—the Terminator hasn't hit the battlefield quite yet—but at the same time we are getting there. So the question of intentionality arises: can a machine, so to speak, be meaningfully said to have intentions? I think that is problematic.

Any moral standard, any moral behavior, has to have some intentionality behind it or else we can't hold an agent responsible, we can't give an agent responsibilities for the future, and I think those are really important things for us to think about. I don't know enough about the technology that does exist to know whether or not intentionality can apply to some of this weaponry, so I think a really good and important question for us to think through together, maybe on this panel, is: How can we think about intentionality?

One other one would be discrimination, which is more of a battlefield question. "Discrimination" sounds like a funny term, as if discrimination is a bad thing to do, but in the legal world and in the ethical world it means discriminating between combatants and noncombatants. I think this is even more relevant for cyberwarfare. I know we have been talking about automated weaponry, but this is an issue in cyberwarfare because any use of cyber technology is going to cross boundaries. We do everything on the Internet now, as we are doing right now. To respond to a cyberattack with another cyberattack is going to have civilian implications—not necessarily casualties, not necessarily harm—but I think that is a norm we need to hold on to and try to be as careful as we can about.

The last thing I would say on this one is that I have been doing a little consulting work with the British military on some of these questions, and they are very interested in how to think about discrimination when it comes to cyberconflict. Really thoughtful people within the military are trying to work this out across different sectors.

I think it is important for us to hold on to principle, but not to be slavish about it either. It has to be something we think carefully about: What are the implications of any kind of use of cybertechnology for broadly defined military purposes?

Again, I'm raising more questions maybe than answers here, but I think they are important questions for us to think about.

I do think, in answer to your overall question, Josephine, that those principles and norms do still apply. It's just that we have to be creative in how we adapt their application to remain relevant for the contemporary world.

JOSEPHINE JACKSON: Thank you very much, Professor Lang, for a very insightful view on that from where you sit on this issue.

Dr. Seraphin, we would be very interested to hear your viewpoints on whether these norms still apply as we are moving into an increasingly digitized world.

ARUN SERAPHIN: I agree with everything Professor Lang said. This highlights something I said before. I admit, as a technologist with a Ph.D. in something technical—I have only a Bachelor's degree from a long time ago in political science—I had to look up these terms to see what they are actually asking us about. This is an example of, "Gosh, we didn't realize indeed you wanted us to think about these things." We were waiting for General Breedlove to write a requirement for us.

JOSEPHINE JACKSON: There is something in here for everybody, Dr. Seraphin.

ARUN SERAPHIN: I guess my base answer is, yes, all of these norms, whatever the list of them is, can apply. Those norms themselves need to be at least rationalized to the kinds of things we talked about with the first question—technology moving fast, technology changing all the time, technology being used in a much more local fashion than previously.

One of the things that occurs to me is that all of these structures seem to have built into them some kind of tradition of having time to think about their application and verify whether or not an actor is on the right side or wrong side of this norm, and I'm not sure technology is going to give people time to think about those kinds of things.

The other thing that strikes me about these norms is that there needs to be some sort of built-in understanding of who the actual actors are. I think one of the things about modern technology, especially in this AI space, is that it's going to provide a lot of anonymity to the actors, and we are seeing this all the time in the cybersecurity space. If the norm actually depends on understanding who is doing what and then connecting it back to their motivations and their intentionality, I think it would be really easy to design technologies to get around that set of norms.

One of the things about modern digital technologies is that they are giving us the ability to collect a lot of information about what is happening on the ground and in cyberspace, and a lot of potential ability to process all that information. It seems to me that the norms were formed in a different era, when there was a lot less signal to deal with and therefore a lot less noise to deal with. With the proliferation of information about events enabled by technology, sorting through what's real and what's not, what's current and what's old, becomes very difficult, so I don't quite know how those norms get applied amid all of that noise.

But again, this comes back to having good structures for the technologists to understand what those norms should be and will have to be in the future. That could inform both technology development and norm development so they can see and rationalize each other, and I think then we would be in a better place.

JOSEPHINE JACKSON: Thank you very much, Dr. Seraphin.

Professor O'Connell, over to you on this question of the applicability of norms as we move to this increasingly digitized era.

MARY ELLEN O'CONNELL: I think Dr. Seraphin's last point is the right one for me to jump off on. We do have structures. We do have solid norms, ancient norms. Dr. Lang spoke to a few of them.

The crisis we are facing today in this discussion—and in so many others, like Ukraine—is that we don't have common knowledge anymore. Especially in this country we have flipped our priorities, so that new inventions, new uses of military force, and new arguments in law to try to free up the right to use weapons wherever and whenever have displaced this ancient knowledge. I am really grateful to you, Josephine, for helping us to bring some of these ideas back to a very central and high-profile discussion.

When you ask, "Are these old norms appropriate anymore for a digitized age," remember that the norms are not fundamentally about the means by which we kill but about the taking of life itself: these are norms about people and the human right to life. Despite any changes you're going to have in technology, that remains. If an AI weapon is designed to kill people, take territory, and destroy buildings, what it is doing hasn't changed, and therefore neither have the norms to which we have to hold these new technologies. We only need to decide whether, in law and ethics, these should always be joined at the international level, because it is through international law that we express the commonly held norms of our global world.

Any weapon has to be lawful under, fundamentally, the right to life, and the right to life takes on different forms depending on the situation. In my mind, the most important additional rule that protects the right to life is the prohibition on the use of military force.

We have a UN Charter that has restated and codified that ancient rule: you do not go to war for reasons such as countering terrorism. Measured against the Charter's clear prohibition on the use of force, that justification does not measure up. That is one major example since the end of the Cold War where the United States has been instrumental in weakening the norm, so that people no longer think it is a norm that should bind.

When you start to weaken these kinds of rules, what are we putting in place? The logic of realism, which teaches you to constantly seek military advantage. Under that idea, of course you are going to invent any kind of weapon and try to speed up and stay ahead of the technology being developed by others.

We need to go back to understanding that we have to prohibit war and we have to prohibit and very seriously limit how we engage in war in order to preserve our humanity. Yes, as General Breedlove said, you will always have criminals, those who are violating the rules, and we see that constantly, but that doesn't mean that the world doesn't try to uphold its core norms through good laws, through good treaties. That is what we should really be moving toward so that we can distinguish the law violators from those who are complying with the law and really care about respect for human life and human dignity.

JOSEPHINE JACKSON: Thank you very much, Professor O'Connell, for offering a lot of very rich insights on this really, really difficult question of where norms sit between these two realms.

General Breedlove, can you weigh in on this debate, please, from a military standpoint based on your experience?

PHILIP BREEDLOVE: The first thing is I don't disagree with anything that has been said. I thought Professor O'Connell's remarks were wonderful. As I understand what she was saying, almost all of that applies to the human making decisions before these kinds of weapons are employed. I think that is the key.

I was going to title my little pitch on this "Good News and Bad News."

The good news is that if we are really talking about the weapons of today, the weapons again are taught by humans, and their "rule sets," as we call them, inside their decision trees are human-designed rule sets. And the good news there is that machines are pretty good at obeying rules—it's a 1 or a 0; they either are obeying a rule or they're not—unlike humans, as we see in Ukraine now, who know the rules and then go in and do exactly the wrong thing.

So I think the good news is, as we look at applying global norms to these weapons that we're talking about for a well-intended nation and a well-intended military force, you absolutely can build the rule sets into these weapons that adhere to what we see as just war, etc. Things like sovereignty are really easy because these machines understand boundaries and geography.
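To make General Breedlove's point concrete, the following is a minimal, hypothetical sketch (in Python) of what a human-designed rule set of this kind might look like: each check is a yes/no test, and the machine simply obeys what it was given. The coordinates, target classes, and collateral threshold are invented for illustration and do not reflect any real system; his "bad news" point, picked up below, is that a badly intended actor simply writes looser values into the same structure.

```python
# Minimal, hypothetical sketch of a human-authored engagement rule set.
# Every value here is invented for illustration only; each rule is a yes/no
# check ("a 1 or a 0"), and all of them must pass.

ALLOWED_AREA = {"lat_min": 48.0, "lat_max": 49.0, "lon_min": 30.0, "lon_max": 31.0}
PERMITTED_TARGET_CLASSES = {"armored_vehicle", "artillery"}
MAX_COLLATERAL_ESTIMATE = 0  # human-set threshold: no expected civilian harm

def engagement_permitted(lat: float, lon: float, target_class: str,
                         collateral_estimate: int) -> bool:
    """All human-designed rules must pass before any engagement is permitted."""
    inside_boundary = (ALLOWED_AREA["lat_min"] <= lat <= ALLOWED_AREA["lat_max"]
                       and ALLOWED_AREA["lon_min"] <= lon <= ALLOWED_AREA["lon_max"])
    lawful_target = target_class in PERMITTED_TARGET_CLASSES
    within_collateral_limit = collateral_estimate <= MAX_COLLATERAL_ESTIMATE
    return inside_boundary and lawful_target and within_collateral_limit

print(engagement_permitted(48.5, 30.5, "artillery", 0))        # True: all rules pass
print(engagement_permitted(48.5, 30.5, "apartment_block", 0))  # False: not a permitted target
```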

But I do not want to trivialize this, because the bad side is really bad news. If you have a nation that is not well-intended and does not intend to act in accordance with the rules, then it builds in a much wider rule set that allows much more collateral damage in the way these weapons are employed. Then you end up with things like Grozny in the past or Mariupol today in Ukraine, where you get horrible application of force because of the horrible intention of the people who programmed the rule sets into the weapons.

I am very much onboard with adhering to global norms, and this is all about the intentions of the humans who do the programming. And remember, even when we send a weapon out, if we call it "autonomous," it still has rule sets that are set by those who send it out. I believe that the appropriate application of rule sets can allow these weapons—and we never want to aggrandize them—to be used usefully in certain situations in a war.

JOSEPHINE JACKSON: Thank you, General Breedlove. Since you have two minutes left, can I ask you a question that might be helpful for our viewers?

PHILIP BREEDLOVE: Sure.

JOSEPHINE JACKSON: How does the military define a drone or a lethal autonomous weapon? What is your working definition of that?

PHILIP BREEDLOVE: I think you already know my answer to this because of the work you did on your Ph.D.

I hate the word "drone" because the word "drone" connotes this thing that goes out there mindlessly and executes things, and it is anything but that.

I don't use the word "drone." I use "remotely piloted aircraft" to point out that all of these weapons to date still have a human in the loop. The human is either actually flying it or is programming it at a keyboard and then changing and adapting to what is observed.

"Drones" is this easy word and makes it sound good, sort of like a Star Wars kind of thing, but in the real-world application of these kinds of weapons today humans are in the loop constantly.

Now, we do have the old-style weapons too, the "dumb" weapons that just fire and go and can't change their path on the way, and that's what you're seeing right now in Ukraine, indiscriminate application of indiscriminate weapons. That is the difference.

So I don't use the word "drones." I always remind people that it is a "remotely piloted aircraft" and that there is a human in the loop.

JOSEPHINE JACKSON: Thank you very much, General Breedlove, for that add-on, reminding us that we do not all come to these debates with the same level of understanding, the same definitions, or the same importance placed on these terms, and it's good that this discussion is bearing that out a bit.

We are now going to proceed to the third and final main question to our panelists before we proceed to the Q&A session of this discussion.

The third and final main question to our panelists is: While this panel is not about the war in Ukraine or other current world conflicts, have your views on tech, AI, or global norms evolved in connection with recent events, technologies, and the constraints around these, and, if your views have evolved, how have they evolved?

Professor Lang, would you like to begin, please?

ANTHONY LANG: Sure. A really important question. I will say two things about this.

One kind of follows up from General Breedlove's last point, which I think is really useful to hear. As we see in Ukraine now, Russia is using incredibly indiscriminate weapons. I don't want to be too celebratory about the precision of new technology, but it does seem like new technologies can be more precise and perhaps avoid some of the indiscriminate destruction that we see being used by Russia currently. I think that's an important thing for us to consider. I don't want to say any weapon is a good thing, but perhaps there are ways in which some of this technology can help us in conducting warfare when it has to take place. That is my first thought on the current situation and how this might be relevant.

The second one is a wider one. I hope it makes sense. This is about cybertechnology. As we all know, Russia has been a user of "soft" power or cyber means to not simply target particular installations or technologies or situations, but to actually try to frame wider narratives through the use of social media. I think that should be considered a form of warfare in a way. The way you try to shape a narrative structure is crucial and really important. I think one of the things that is interesting about the current conflict is how narrative-driven it is or how much perhaps we are now more aware of competing narratives.

For instance, it's fascinating the way in which Russia wants to re-narrate this conflict as "the Great War all over again." I think we need to take seriously their efforts to do that because that creates beliefs and justifications for action that become problematic, to put it mildly, in the same way that in a post-9/11 world—something that Professor O'Connell mentioned before—the United States narrated a particular way of understanding terrorism and understanding global conflict, which I think also had a great many negative consequences. So I think we need to be aware of how cybertechnologies enable conflicts to be narrated.

The last thing I would want to say about this is that one of my responsibilities as an academic, I think, is to enable students to really understand the way in which they are being shaped by social media, and the way in which the narratives that they come across, that all of us come across—the way in which we describe warfare, the way in which we understand what the stakes are in a conflict—are the result of things they may not even be aware of, and that they need to be more critical in their ability to understand the news.

Just as a small example, I think it was a huge mistake for the British regulator to ban RT, the Russian television station, because it's important for students and all of us to see what Russia is saying and how they are narrating what's happening, even though we disagree with it. We have to be adults about this and see what the other side is saying.

Again, that is television, an older analog medium, but I think cyber enables social media to play a crucial role, and we're seeing it front and center in this conflict. We need to be better consumers of those narratives and more careful in how we understand them. Again, that is not directly a weapons technology, but it is maybe a wider point to be made about the current conflict and how it relates to some of the issues we have raised here.

JOSEPHINE JACKSON: Thank you, Professor Lang. You have one minute left in your allotted time, so I would like to ask a question. You mentioned being better consumers of information. Information is shaped to some degree by norms, by things like AI now and also tech. Would you say that you are more optimistic now about where AI, tech, and global norms are going, or do you feel a bit more dejected about the trends? If you can summarize in a minute, that would be great.

ANTHONY LANG: That's a big question for a minute.

Just to go back to a point that Dr. Seraphin made in the very beginning, which is the democratization of these technologies and the way it is coming from a huge number of commercial sources and all sorts of things, on the one hand, that is happening, and I don't think we can stop that. I think more information is better, so I think the growth of all sorts of ways that we can learn, etc., is good.

I think technology makes a huge difference in all of our lives. I am a diabetic, and I have an insulin pump that has made the management of my diabetes incredibly easier. That's just a small example, but there are lots of ways in which technology is a good thing. I am a big fan of technology. Of course, it is just a matter of how we think about these things and how we regulate them.

That is a pretty weak answer to your big question, Josephine. Maybe we can return to it in the discussion in the second half.

JOSEPHINE JACKSON: Well, you sound like an optimist, and there is room for optimism in these issues.

Before we get too much further into that, we would like to hear from some of our other panelists on this point. Dr. Seraphin, would you care to weigh in on whether or how your views on AI, tech, or global norms have evolved in connection with recent events, technologies, and the constraints around these?

ARUN SERAPHIN: I will modify the question a little bit and change it to: Given what is going on in Ukraine, what are the implications for the tech community? Just a couple of observations, some of which I will just copy from Professor Lang.

One is, at least the way I would say it, that the importance of information operations in this Ukraine environment is just amazing to me at all different levels. The flow of information from all of these different kinds of sources, constantly consumed by the public, decision-makers, local military leaders, and strategic thinkers, combined with, as we've talked about before, precious little ability to discern between good and bad information, real and fake information, has really shaped and constrained what a decision-maker is allowed and not allowed to do given political realities. At least in my lifetime I have not seen that so much before.

I will give just one example. According to public press reports, we have been making more use of information that we would have considered classified and completely controlled, releasing it as an information operation against the Russians. That is just a new way of thinking about things that we hadn't really done before, at least to my knowledge.

Where does this drive me as a science and technology person? I think that the Department of Defense needs to invest more in people in the social sciences, in understanding the implications of information on the political and policy environment and how that will impact the ability to prosecute military operations and goals. That is one thought.

Another thing we are learning in this space is that this is classically a coalition environment and there is a huge need for interoperability, not just of communications and networks but also of systems as a whole and logistics, and we are seeing all of that play out.

But to this point I think we need interoperability of those norms, and that then drives you down to what's the tech part of this. At least the way I think about this, those norms are actually applied at a very local and tactical level, and hopefully they are shaped by a grander understanding that these norms are a bit eternal and deserve to be preserved.

If they are not trained into how units operate—or, in the case of what we're seeing happen on the ground there, where some of the people who are now combatants had no training—then even if they are the greatest people in the world, these norms were never trained into them, so how are we going to deal with that moving forward? I always think that technology can help with those kinds of things.

The last thing that occurs to me that technology hasn't really helped with is that one way that the West is projecting force in this environment is through the use of economic sanctions, and unlike almost any other weapon that we would launch at an enemy—and I think General Breedlove talked about this a little bit before—we don't really understand the weapons effects, we really don't understand the collateral damage, and we really don't understand how to turn it into a precision-guided weapon. Therefore, the sanctions seem to me to be a pretty dumb bomb at this point.

Maybe the technical community can help with understanding better the broader impacts and collateral damage that result from the use of these kinds of sanctions, and also understanding how these will work or not work as well if you don't have a complete international community alongside you, and also understanding then: "Faced with this situation with Russia I can use these kinds of sanctions. Faced with another situation with"—let's pick a country at random, China—"I would have to use a different kind of sanction, or maybe no sanction would work."

Those are the kinds of things I think and I hope my community starts to think about and the Pentagon starts pushing us to think about moving forward after what's going on here.

JOSEPHINE JACKSON: Thank you very much, Dr. Seraphin, for adding in a lot of useful insights into this debate and making some broader connections beyond just tech, AI, and global norms, which are important for us to consider in the broader context of these questions.

Professor O'Connell, would you please walk us through how your views on tech, AI, and global norms may or may not have evolved in light of recent events and constraints?

MARY ELLEN O'CONNELL: My views of the norms haven't changed. As I pointed out, these are ancient norms, they are related to our greatest spiritual insights into human life, and it's part of my work to explain why they won't change. That hasn't changed.

What is very, very troubling to me is the point that Dr. Seraphin made: Why are these norms so poorly understood? Why do we have to talk about reinventing norms for a digital age? Why are these not so deeply embedded that people have an instinctual understanding of them to shape the kind of technological developments that we're going to make?

I thought at the beginning of the Ukraine conflict we would, in the horror of what was happening, understand the need to reeducate ourselves in what these norms require in all aspects of life. Unfortunately, I think the logic of realism, militarism, and the advantage of militarized tech is proving stronger than the norms protecting the right to life, what we call a "peremptory" norm, a fundamental norm prohibiting the use of force and restricting the means of killing to what is required by necessity and proportionality to win a battlefield advantage and nothing more.

I don't want to overstate. This is a crisis moment, but there is vast knowledge. There are so many countries that support a ban on AI weapons. There was a petition sent to the Geneva negotiations just a few years ago signed by over a thousand technologists working in AI, including such well-known people as Stephen Hawking and Elon Musk, supporting a ban on AI weapons. So there is knowledge and there is hunger. I am afraid that the most technologically advanced countries in terms of resources devoted to military development of AI are the ones resisting.

We will see what happens through Ukraine. I am hoping for something much better.

Let me say that it has been a long road to get to Ukraine and it has been driven often of course by the mentality that I talk about, which is constantly looking for the next advantage in developing military weapons to defeat an opponent.

You really see the break from prior norms at the end of the Cold War. It is really Bill Clinton who makes such a fundamental break that we can trace to Ukraine, and the critical question is now: How is the world going to react to Ukraine? Are countries like the United States, Great Britain, and France going to continue this logic of moving away from fundamental norms or not?

In 1999 the United States used its high-tech air power to bomb Serbia and Montenegro for 78 days. It was the first use of military force by the United States since the adoption of the UN Charter in 1945 in which the United States put forward no legal justification for a use of military force that ended up in the deaths of 20,000 people. I trace the decline of norms to that point.

This moment of culmination and horror over Ukraine is a critical turning point. We can either continue with this logic, adopted by Russia now and looking increasingly interesting to China, or turn it around. If we are going to turn it around, it will take a huge effort not just by technologists but by everyone who wants to see what for some reason has become the new phrase of the moment, a "rules-based order." We are not going to get it by advancing and trying to say we can stay ahead in developing weaponized uses of technology, especially through artificial intelligence.

JOSEPHINE JACKSON: Thank you very much, Professor O'Connell, for your insightful views. These are very, very complex issues, as you point out. It is very hard to distill them down into just three-to-five minutes of speaking time, but I think you succeeded commendably at that.

General Breedlove, we would love to hear from you on how your views on tech, AI, and global norms may have shifted or evolved in light of recent events, technologies, and the constraints around these.

PHILIP BREEDLOVE: First of all, what a great question, and again three very rich sets of commentary so far and so much that I agree with, a few things I don't, but just really good. I want to highlight just a couple of things.

Dr. Seraphin, your discussion of information ops I think is incredibly poignant. We have known for almost a decade that Russia and other actors have been attacking us and using social media to get each of us to attack each other, and these weapons are not well understood and are getting extremely capable. A moment of incredible confusion in the Ukraine crisis came when Russia put out a deepfake of President Zelenskyy basically giving up, and think of the confusion in his military and what it took to clear that up. These weapons are becoming very, very capable.

I also would like to go back to the comment about what's going on in Ukraine, where we now have citizen soldiers taking to the battlefield who are not trained.

It's another good news/bad news sort of thing. The good news, I think, is that, much like in the American Revolutionary War with Britain, citizen soldiers took to the field, or we would have lost. But the bad news is exactly as you point out: these folks are not trained in the Geneva Conventions and the rules and laws of war, and even though humans will sometimes choose not to obey them, at least the real soldiers are trained in them.

I do want to agree with Dr. O'Connell as well that norms do not change. This is about our basic value systems. The sad news is that there are large parts of this world that don't share our value systems and therefore they build their approach to war on a very different set, and that's tough. I am a huge proponent for understanding what it is and why it is that we do certain things and that those who work for us and the weapons that fight for us are built in accordance with those norms.

The one thing I would tell you in answer to the basic question is that Ukraine has deepened and heightened a concern that I have had for a long time. That is that our global institutions are completely unable to address those nations who get outside of these norms that Dr. O'Connell talked about.

We see right now in Ukraine a huge example of that. Russia is fighting a criminal war. Almost everything it is doing on the battlefield now does not adhere to any of the norms we have been talking about. It is unable to fight Ukraine's military in the manner that it wants to fight, so now it is taking the war to the civilians. It could not be more criminal in nature.

What we find, as we have found sometimes in the past, is that the United Nations is completely helpless to do anything about it. They make some proclamations, but they are not going to change Mr. Putin's actions on the battlefield. They haven't yet and they won't in the future.

I believe in these institutions, I believe in the United Nations, and I believe in NATO, but what I would like to see is the ability inside these institutions, when a nation goes rogue like this one, to bring the power to change that bad behavior. More than anything, what Ukraine has really pushed me hard on is that we need a set of institutions that can bring order to a problem where a nation has absolutely chosen to exist outside of those norms that we talked about.

JOSEPHINE JACKSON: Thank you very much, General Breedlove, for those insightful comments.

We have now concluded the main discussion session of this presentation and we are going to move now into the Q&A session for the balance of the time remaining, which is about 25 minutes.

I would like to take this opportunity as we begin the Q&A to open the session up to the panelists. Do you have questions for each other?

MARY ELLEN O'CONNELL: I certainly do. I have a question first for General Breedlove and Dr. Seraphin.

General Breedlove said in his opening comments that we are not yet at fully autonomous robotic weapons, that we still have humans in the loop. Of course, there is a huge amount of automaticity already in the use of remotely piloted weapons and other kinds of weapons, as he mentioned, and I think that has had a deleterious impact on what we consider to be acceptable killing in the United States. I think there is still a great deal of criticism of how the United States uses its technology of war, but I believe it is correct to say that we already have prototypes on the drawing board for fully autonomous robotic weapons that have no human in the loop in terms of the critical aspects of target selection and execution.

The other question I have is for Dr. Seraphin. We have come up with and are developing, and I believe I am correct to say we already have prototypes of, fully autonomous robotic weapons. Do you see the ability to counter tech developments that are headed in what I consider to be highly unlawful and unethical directions? Can we come up with the tech to defeat AI, countering it with counter-AI?

I have been thinking about this a lot in the cyber area, which I do not believe is an area of weapons—we have a question, and I would like to talk about that more, about whether we need a cyber Geneva Convention—and my interest is in looking at cyberspace as economic and communications space and what benign technologists should be doing to keep us safe from the criminals, from those who would try to harm us physically, financially, intellectually, or emotionally through harms in cyberspace.

Would it not be possible to both redirect certainly U.S. technological efforts into a true defense—which is where we were supposed to be after the end of the Second World War—defending cyberspace from malicious uses and also defending us from inventions of fully autonomous robotic weapons?

PHILIP BREEDLOVE: I really appreciate Professor O'Connell's question. I am hoping that I did not say it exactly the way you characterized it.

I think what I said was that we don't have any true AI on the battlefield yet. We all know that the instances of true AI are very limited—the game of Go and a few other things—but real artificial intelligence is limited. Rather, "AI" I think is overused in the world to describe a whole category of technologies that are leading to the point where machines actually think independently and have new thoughts by themselves, which is what AI is.

One of those categories is the one that is used most in military technology, which I mentioned before: machine learning, where humans teach a machine and help a machine to repeat and learn certain rule sets and applications of those rule sets in that space. What we really have approaching the battlefield now is the machine-learning track of the overall AI moniker, but it's not true artificial intelligence. That is a long time from flying or being used in weapons.

But the point you make is still good, and that is, as I think I said earlier, we need to get in front of this now because it will come. It will come, but as we train machines and as we build the rule sets for AI applications there is a human in the loop: the human is training the machine, the human is writing the rule sets that are in the software, and so on.

Even when sometime way out in the future we get to true AI devices that will go out and think independently by themselves, it is still going to be a human who decides to launch that weapon into a combat space with a set of rules of engagement expected in that combat space.

I don't think we are ever going to get to the point where we have that aircraft sitting on the ground deciding by itself to go off and kill somebody; rather, we are always going to have weapons that a human decides to send into the combat space with a set of rules onboard to guide them.

But I am not trying to trivialize this. We have a problem in front of us, and it is good to get after this now and start talking about the norms, because once we get there it will be too late.

I would just reiterate my good news/bad news: In all of these situations nations that want to do the right thing will seek to do the right thing. Nations that do not care and just want the battlefield advantage that Professor O'Connell talked about are going to send their machines forward with a very different rule set.

JOSEPHINE JACKSON: I think this links nicely with a question that is in the chat now that is posed to all panelists: "Two frontiers for tech norms are cyber and outer space. What are the prospects for a digital Geneva Convention in cyber and a treaty to ban the militarization of outer space?"

MARY ELLEN O'CONNELL: I don't mean to speak too much, but I do work in both those areas, cyber and space law, and I will say briefly just some factual points.

To my mind there is already the most important model of treaty for cyber regulation, and that is the Budapest Convention on Cybercrime. Again, let me reiterate—we don't have time here, maybe in another discussion, Josephine—my firm view that we should not equate cyber with weapons. It is a communications means, so let's keep it out of trying to regulate it under the Geneva Conventions. I am a big critic of the idea that you should have a handbook of cyberwarfare, something many of you will have heard of, the attempts through the Tallinn process.

On outer space it is even easier. We have treaties already in place, the Outer Space Treaty and the Moon Treaty. The United States has been resistant to interpreting those treaties the way most of the world interprets them, which is that outer space should be free of military uses of all kinds. We have spaces like that already on the globe: Antarctica is a military-free zone. Outer space should be as well.

We are desperately in need of strengthening these treaties to make it very clear to countries like the United States, China, and Russia, which are interested in using space for military purposes, that the international community rejects that fundamentally and in the clearest possible terms, so that there can be no confusion and no room for those who say, "We can just invent another weapon that doesn't really fit the treaty."

We have seen a lot in the post-Cold War period of playing around with norms in order to make space for technological developments that, in the realist mindset, keep a country stronger and ahead of others, whereas all it really does is break down the norms we all need to live by and flourish on this planet, which so desperately needs our help.

ANTHONY LANG: Could I come in on this one too? I think it's a great question about the prospects.

I can't predict if it is going to happen or what we will get. I think Mary Ellen's points are really important, that there are existing frameworks that we can build upon and work within, but one of the things I would put out there is what is really necessary to get there.

Professor O'Connell has mentioned realism a few times now. Let me invoke a realist, Hans Morgenthau, who famously at the end of [Politics Among Nations], kind of a textbook on international relations, argued that diplomacy is the most effective means to create peace in the world, which I am a strong believer in.

What is really interesting and is really important—and as I am sure everybody on this panel and in the audience knows too—is that creating treaties, even revising treaties, including new dimensions within treaties, requires hard diplomacy, really hard work. That is not easy because diplomacy is all about seeing the perspective of the other and about trying to figure out what the other's needs are.

When we talk about something like outer space—I have learned a lot from my colleagues here: Adam Bower, who works on outer space law, and our visiting fellow Mike Byers from Canada, an international lawyer who has also done great work on this—it is going to require taking into account not only state interests but commercial interests too. I am learning a lot from them about outer space and the dimensions and potentials of treaties that exist already and that we can build upon. How you do that kind of public diplomacy today is something I think we have to put our heads to very carefully. We begin with the legal frameworks, we begin with the existing institutions, and we try to see what we can do by bringing people onboard, and it's going to require a lot of diplomatic work to do that.

My hope is we do get some new conventions, that we do get something that can protect us from a lot of this stuff, but I think it's going to require a lot of diplomatic effort. Countries like the United States, like Russia, like China, the big countries, have to be out front in saying: "Yes, we've got to talk to each other and we're going to have to figure out how to meet each other's needs and how to take into account each other's perspectives in order for us to create a peaceful world."

PHILIP BREEDLOVE: Just one short note. I'm not a lawyer, so I'm going to avoid all that, but I do agree with what both presenters have brought forward.

I want to take it back to the question just before. Sadly, there are conventions out there, as Professor O'Connell talked about. Also sadly, they are already not being applied. For instance, space is already militarized, and it's going to get more militarized. My previous remark I think is also applicable here: What we don't have are institutions that can enforce those norms that I think we would all agree to.

ARUN SERAPHIN: I would add to this—it's clearly not an area where I can add to the expertise here—that I think technology might actually help speed the process of creating these kinds of conventions or norms in these new spaces, whether it's cyberspace or outer space. That is partially because it's democratizing the ability to perform those things that we used to call diplomacy. Think about it in the case of what's going on in Ukraine: it was a pretty grassroots, democratic set of behaviors and voices that drove major companies to disengage as part of the sanctions effort against Russia, despite it being in their economic best interest to do exactly the opposite.

I am wondering if the technologies that are causing these problems can also be part of the solution here because of their ability to gather people in a different way and share information in a different way, to push out and put pressure on decision makers and the like.

JOSEPHINE JACKSON: Thank you, panelists, for your thoughtful views on that question.

We have one other posed to all of the panelists, and the question is: "What is the biggest change in global norms that you have seen during your career or the most impactful change that you have seen?"

ANTHONY LANG: I guess we should probably keep the question focused on our topic, because there are of course lots of norms that have evolved, although I am in complete agreement that these are things from a longstanding tradition that we should hold onto.

What I would say, just in terms of a norm that has unfortunately been undermined by a lot of this—again I am thinking more of the cyber side of things—is the distinction, or the discrimination, between combatants and noncombatants.

I think it's the ways in which we are able to shape people's views—we being everybody, through social media—that to me are troublesome. Not that I think social media is a bad thing; it's a great thing, and it can do lots of wonderful things. But when it comes to these military matters, thinking about the narratives of conflict and what we can do and how cyber tech can then actually impact people's lives directly in negative ways, I think that's a real shift in norms. It doesn't help us uphold that longstanding tradition that warriors on the battlefield do one thing and should keep it as much away from civilians as they can. Maybe that is inevitable.

It goes back to Arun's point about democratization. I think democratization is happening across so many sectors that this is a place where more and more people know what's going on, get involved with what's going on, and that creates problems.

As General Breedlove was saying too about volunteer soldiers, the British military has been very clear that no British soldier should be allowed to go overseas to volunteer to fight in Ukraine; even though they do have the normative training, they would then be fighting as volunteers, which is difficult to think about. It's a good thing and it's a bad thing.

That was a very rambling thought on that because lots of norms have changed.

PHILIP BREEDLOVE: I would like to violently agree with Professor Lang's comments about social media and how we can set and enforce norms into the future on a tool whose effectiveness in working inside our countries, in elections and other things, I don't think we have any real idea of yet. We have to get a grasp on these tools. I will just violently agree there.

One thing I want to bring down to a very narrow view is one of the sets of norms that really concerns me. I work with the Nuclear Threat Initiative, founded by Senator Sam Nunn, and we live by and talk about the principle that a nuclear war is unwinnable and should never be fought.

Yet in our world we are stepping back from norms that we had in place for a long time. The one I would like to point out is the Intermediate-Range Nuclear Forces (INF) Treaty, where we limited these very belligerent and very provocative short-range nuclear capabilities. We knew that Russia had been violating the treaty for some number of years, but I think we should have tried to pull them back into the treaty. Instead, even though we were the only party still abiding by it, we decided to exit the treaty.

All of this is just really disappointing to me. I think we should be reaching toward those norms, and now we're at risk of losing the Strategic Arms Reduction Treaty and others. One thing that concerns me is that in the past we worked hard to get some of these agreements and norms out there, and in our world today we seem to be falling back from them.

MARY ELLEN O'CONNELL: The biggest change that I have seen is a U.S. move toward an intellectual construct of exceptionalism that came about at the end of the Cold War. Most great countries think they're exceptional, but the United States, embracing the idea that it had been the United States that won the Cold War, put international norms, laws, and institutions into jeopardy. We no longer saw the critical factor that success in all of these normative areas, these intellectual constructs, requires an attitude of equality: that all countries are bound by the same principles.

We thought we were above the rules, we had the tech, we could do things, we knew better, and we were always going to be acting for the good when we were using military force for humanitarian intervention in Kosovo or using military force to bring down the regime of Saddam Hussein in Iraq in 2003. This has to change for the United States to lead in the way we did after the end of the Second World War in rebuilding a world that is based on commonly held norms.

The one problem I see with social media, and how it might add to this, is that it doesn't seem to be having that impact on the decision-makers—in this country for certain—in convincing us that we need to be part of the world, not trying to dictate to the world. We are going to have another opportunity. We need to support Ukraine and end this horrible conflict—Ethiopia as well, Myanmar as well—and we can only do that through a reset that is committed to what the international community has developed over its 400-year existence.

JOSEPHINE JACKSON: Thank you very much, Professor O'Connell.

ARUN SERAPHIN: I don't think the norms themselves have changed. They are norms. They are something that comes from us as people, as humans. I almost think of them as laws of physics that you just deal with.

The thing that I have seen change over my life is (1) technology allows people to evade the norms if they so choose, and (2) much more practically, I think it would be surprising, at least to my kids and people of their generation, to even hear that someone thought there was a norm of nonintervention, because for their entire lives all they have seen is intervention, whether in the real world or in the cyber world. I think it would be a surprise to them that someone felt this was a violation of a norm, although I think they would agree in the end that, yes, it is a violation of something that should be a norm.

Going back to what Professor O'Connell is saying, maybe the biggest change is that people don't actually know the norms and don't know how to apply them to their current realities and maybe there was a previous era where it was much more ingrained in them. I don't know. It's a question worth thinking and talking about longer.

JOSEPHINE JACKSON: I would agree, Dr. Seraphin.

This has been such a great discussion on tech, AI, and global norms, and it could certainly go further. There are so many interesting and insightful strands that came to light here. I wish we had a few more hours to explore them, but unfortunately we do have to draw the panel to a close at this time.

Before we go, it took many people to put this panel together. I want to briefly express my gratitude to the Carnegie Council for hosting and supporting us, to all of you panelists for sharing your valuable expertise and insights and agreeing to participate amid your very busy schedules, and to the audience members for joining in and asking questions.

If you are interested in viewing the recorded panel discussion, I am told it will soon be available on the Carnegie Council website.

It has been a pleasure being with all of you today. Thank you. We will close it here.

ANTHONY LANG: Thank you, Josephine.

MARY ELLEN O'CONNELL: Thank you, Josephine.

PHILIP BREEDLOVE: Well done. Glad to be on this panel with you all.
