Killer Robots, Ethics, & Governance, with Peter Asaro

Feb 11, 2020

Peter Asaro, co-founder of the International Committee for Robot Arms Control, has a simple solution for stopping the future proliferation of killer robots, or lethal autonomous weapons: "Ban them." What are the ethical and logistical risks of this technology? How would it change the nature of warfare? And with the U.S. and other nations currently developing killer robots, what is the state of governance?

ALEX WOODSON: Welcome to Global Ethics Weekly. I'm Alex Woodson from Carnegie Council in New York City.

This week's podcast is with Dr. Peter Asaro. Peter is an associate professor and director of graduate programs at the School of Media Studies at The New School. He is also co-founder and vice-chair of the NGO International Committee for Robot Arms Control and a spokesperson for the Campaign to Stop Killer Robots.

This podcast is the latest in a series on artificial intelligence and ethics, with this week's focus on lethal autonomous weapons systems, or killer robots. Peter co-founded the International Committee for Robot Arms Control in 2009, so he has been one of the leading voices on this subject for a while.

We had a comprehensive talk about lethal autonomous weapons, touching on the state of the technology, the many risks if they are deployed, and questions about governance.

For a lot more on AI, including podcasts with IBM's Francesca Rossi, Harvard's Mathias Risse, Johns Hopkins' Heather Roff, and Western Michigan's Fritz Allhoff, you can go to carnegiecouncil.org.

For now, here’s my talk with Peter Asaro.

Peter, thanks so much for coming today.

PETER ASARO: Thank you for having me.

ALEX WOODSON: I've done several podcasts on artificial intelligence (AI) in the past few months. I have not talked too much about autonomous weapons systems and killer robots, so I think you're going to be a very good person to speak to that subject.

Just to get started, where are we now in terms of this technology? What has been new in the last year or two?

PETER ASARO: I just have to start by defining what we mean by "autonomous weapons." People tend to picture something like The Terminator, but what we're really talking about is any kind of weapons system that is automatically selecting and engaging targets without human supervision. Generally, this is done with software, artificial intelligence that's coupled to some kind of sensor network, so it's using radar, video cameras, or some other kinds of sensors to identify what is a target and what's not a target, and then applying lethal force, firing some kind of weapon, a projectile, missile, bomb, something like that.

We have been working for the past seven years now trying to get an international treaty to prohibit fully autonomous weapons systems of this nature. At the time we started that campaign seven years ago, we said it would be about five to ten years before we started to see these weapons being fielded. And where we are now is that a lot of countries have advanced development programs for different kinds of systems that are using autonomy in the critical functions of targeting and engaging weapons, and they're testing them.

I think it's safe to say at this point there is not really a "fielded" autonomous weapons system. There are historical weapon systems, missile defense systems, that you can argue have those capacities, but we're not really trying to ban those types of systems because they're primarily anti-missile systems, they're not anti-personnel systems, they're not targeting vehicles or buildings or anything where people would be inside.

Essentially, right now we're still trying to get a treaty. Countries are dragging their feet while producing these systems at a rapid pace, and that's what is quite worrying. I think our original prediction is fairly accurate in that we're a few years away from seeing these systems being deployed. They're already in testing and in operation, but countries aren't actively using them in war zones at this point.

ALEX WOODSON: Which countries are leading this technology at the moment?

PETER ASARO: The United States, of course, has one of the most advanced technical bases and also has been spending the most in research and development, but close behind we find China, Russia, Israel, South Korea, and Turkey, all of which are developing systems that would fall into these categories.

ALEX WOODSON: Have you noticed a change in how the United States is approaching this technology from the Obama administration to the Trump administration, or has this just been progressing at a natural rate?

PETER ASARO: My sense of the technical development is that it has not been much influenced by the presidential administrations. Certainly their activity in the United Nations and their engagement with UN processes has changed pretty dramatically. I think it's pretty well known that the Trump administration doesn't really like the United Nations and has stepped away from the Human Rights Council, UNESCO, and other kinds of international cooperative agreements. In many ways it hasn't been leading the discussion at the United Nations and the Convention on Certain Conventional Weapons (CCW), which is currently debating the issue, as it did in the early years under the Obama administration. They have been much quieter and have not really been putting forth an agenda to move that discussion forward.

ALEX WOODSON: I want to come back to the United Nations and some of these regulations.

First, what are the specific risks of these autonomous weapons systems? What are you afraid of seeing happen if these are deployed on a large scale during war?

PETER ASARO: I think there are a number of different kinds of risks that these autonomous weapons systems entail, all of which I think are critically important and which together overwhelmingly argue in favor of a ban. The first kind of risk involves accidents, civilian harm, and mistakes. These are all systems that are going to be using software. We don't know how these things are going to be tested or how reliable they're going to be. Every day we're finding new forms of bias in algorithms that are designed to make judgments about people, and these are going to be systems that are making judgments of life and death about people, so there is a whole set of concerns around that, and then, of course, what the civilian impact will be in relation to that.

There is also a set of concerns around automating military decision making. If two countries had automated border control systems that accidentally ran into each other and started an engagement, they could start a war, and these systems could escalate wars, all without any kind of human approval or military or political decision making behind it. You could wind up with wars happening more often and wars becoming more aggressive or more violent more quickly, and things of that nature.

Coupled to that is the possibility of arms races. You have states that are developing these systems, and their neighboring competitor states are thinking, Uh, we're going to get left behind, so we have to build these things. So you wind up with rapid development and acquisition, and of course that's a huge waste of resources, but it also creates dynamics of instability.

When states are in a sort of détente kind of situation, you have some relative stability. If states understand the nature of their weapons systems and the nature of their adversary's weapons systems and are fairly confident about that, they're much more stable in terms of their relation because they know what they can do and what they can't do. But these systems introduce a whole lot of uncertainty in terms of what the systems will actually do and how capable they will actually be. They promise to offer all sorts of advantages, which means if you're prone to war, you might be much more likely to use them, but they may not actually work.

Then you have the possibility of engagements where you have to decide whether this was a system that engaged accidentally, or a rogue system, or one that got hacked, or whether it was actually the intention of a country to use this weapon to attack your country. So now you start to get diplomatic forms of uncertainty as well about what the intentions are and what real threat could come from an assault. That creates a whole set of issues around global and regional stability and instability.

Then you have the questions around weapons of mass destruction. One aspect of these weapons is that they could eliminate some of the bulwarks of existing nuclear deterrent strategy. One of the big elements of that is underwater submarine-launched nuclear weapons, but you could create autonomous vehicles that could follow submarines around for years without surfacing, refueling, or signaling anything and then could potentially be activated at any moment to destroy the submarine they're following. If those systems were effective, that would eliminate one leg of the "nuclear triad," as they call it, which is the guaranteed response to an initial attack, and that could again destabilize the entire nuclear situation.

There are also worries about autonomous weapons becoming a new kind of weapon of mass destruction. We traditionally think about biological, chemical, and nuclear weapons as weapons of mass destruction, but the essential element there is that an individual or small group can deploy a weapon that has mass-casualty and mass-destructive effects. If you have an army of robots, you are again in a situation where an individual or a small group could deploy these weapons with mass-casualty effects. So it could be its own new class of weapon of mass destruction, which has its own kinds of political implications.

You also have concerns around these systems being used in policing and being used by tyrannical leaders to suppress democratic movements and opposition and things like what we saw in Tahrir Square in the Arab Spring, where the military was ordered to attack the public and stood down and refused to do that, and it led to the fall of the regime. Robots aren't going to disobey orders that way, so there's a strong incentive there, and you could see these kinds of democratic protests being much more brutally suppressed by automated systems.

Then I think there's a set of legal and moral concerns. To the extent that these systems are delegated the authority to kill, the operators may or may not know exactly what these systems are going to do, given their level of sophistication and autonomy and the range of areas and kinds of engagements they can carry out over a period of time. You can't necessarily hold an operator responsible under the law for everything the system does, because they can't necessarily predict everything it is going to do. To that extent, you can't really hold individual people responsible for certain kinds of war crimes that these systems might commit, because they really don't have the knowledge of what that system is going to do.

It's very hard, of course, to hold companies or engineers or even states responsible. There is a general state responsibility for deploying the weapon, but they have plausible deniability that they knew it was going to commit a genocide or a war crime or something like that. So it serves in that case to undermine international law and humanitarian law, the Geneva Conventions, and things like that.

Then there's the moral question of whether we allow autonomous weapons to decide who lives and who dies and what it means to delegate the authority to kill human beings to machines, and I think that strikes at the nature of human dignity and the value of human life. We permit killing in war in self-defense and things like that only because it's a human moral agent who is actively defending themselves or another human moral agent against a threat.

Machines aren't really individuals. They are not moral and legal agents. They can't really make the determinations of when it's necessary to take a human life. They don't really understand the value of a human life and the significance of taking human life, and I think it's thus a moral mistake to delegate that authority to them.

ALEX WOODSON: I think that's a really important point. You said in a talk that I watched: "It matters how one is killed. It has to be a human making the determination." It's a bit of an obvious point, but it really needs to be underlined I think in talking about this technology.

PETER ASARO: Absolutely. Think about that in the context of larger arguments about artificial intelligence and ethics. We're going to have algorithms making decisions about your job, your home, and where you get to go to school. If we can't agree that machines shouldn't be making the choice of whether you live or die based on some set of mathematical criteria, then we're going to have a lot of trouble arguing and framing exactly what the limits are on these algorithms making all kinds of decisions about your life, your ability to review and appeal those decisions, and the transparency and accountability behind all of that.

ALEX WOODSON: What do you say to people who argue that these autonomous weapon systems could help limit civilian casualties? This is an argument that the United States made at a UN Convention on Certain Conventional Weapons meeting in Geneva in 2018. They listed a number of reasons why this could have humanitarian benefits, why this could limit civilian casualties and make war safer.

PETER ASARO: As a computer scientist and AI researcher, it's very compelling to think that we can frame a question of accuracy and precision of weapons and say: "Well, we're going to design this weapon so that it really avoids civilians and only kills bad guys, enemy combatants, and we just refine these algorithms over time, and eventually they'll be better than humans, and we'll have cleaner warfare. We'll have fewer civilian casualties, and everything will be great." Sure, as an engineering problem framed in a very narrow way like that, it seems plausible and conceivable that we could achieve a certain technical capability there, but that's very decontextualized from what's happening in warfare overall.

We already see this to a great extent with precision-guided munitions. It's quite easy to argue, "Of course, it's better to drop a single precision-guided munition to destroy a target than to carpet-bomb an entire city or area to try to hit the target, and maybe you don't even get it the first time, and you have to carpet-bomb several times." That seems like an obvious argument, but if we look at the history of how precision-guided munitions have been deployed, what it really ultimately does is reduce the cost per target in a military conflict. When you reduce that cost by, say, tenfold, which is essentially what the U.S. military got out of switching from dumb bombs to smart bombs, targeting lists grow ten times, so they're bombing ten times as many targets.

You have to actually do the math to figure out whether you're ultimately impacting more civilians. Even when you hit the right target, there's still collateral damage; there are still civilians who are impacted even when you're hitting only military targets. So the question is whether going after ten times as many targets harms more civilians than going after a single target by dropping a lot more bombs around it. It's not so straightforward in that respect.
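As a rough illustration of that "do the math" point, here is a minimal back-of-the-envelope sketch. All of the numbers are hypothetical and chosen only to show how the comparison can flip; they are not drawn from the interview or from any real campaign data.

```python
# Hypothetical back-of-the-envelope comparison of total civilian harm.
# Every number below is an illustrative assumption, not a figure from the interview.

def total_civilian_harm(num_targets: int, harm_per_target: float) -> float:
    """Expected civilian harm = number of targets struck x expected harm per target."""
    return num_targets * harm_per_target

# "Dumb" bombs: fewer targets, but each target requires area bombing,
# so expected civilian harm per target is high.
dumb = total_civilian_harm(num_targets=100, harm_per_target=5.0)    # 500

# "Smart" bombs: roughly 10x cheaper per target, so the target list grows ~10x;
# each strike is far more precise, but collateral harm per target is not zero.
smart = total_civilian_harm(num_targets=1000, harm_per_target=0.6)  # 600

print(f"dumb-bomb scenario:  {dumb:.0f} expected civilian casualties")
print(f"smart-bomb scenario: {smart:.0f} expected civilian casualties")

# Precision reduces total harm only if per-target harm falls by MORE than the
# target list grows: here harm per target drops about 8x while the target list
# grows 10x, so total expected civilian harm actually rises.
```

With different assumed parameters the comparison goes the other way; the sketch only shows why precision per strike does not by itself settle the question of total civilian harm.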

Then, of course, there are these larger questions, which are completely unaddressed by that: What does that do to the dynamics of warfare? What does this mean for arms races? What does this mean to introduce a new type of weapon of mass destruction? What does this mean for human dignity? What does this mean for international law and accountability under the international law of soldiers and officers?

Yes, maybe, but there's a lot more to worry about here.

ALEX WOODSON: We have gone through a lot of different concerns that you have about autonomous weapons systems. One that could be a huge concern I would think would be hacking into these systems. What are your specific concerns as far as that?

PETER ASARO: I think cybersecurity is becoming a big issue in international relations in general. At this point, we're concerned about civilian infrastructures and data and maybe command-and-control systems and things like that, communications systems. But once we automate violence in the form of autonomous weapons, those systems can be hacked. That means they can be taken over and turned against other targets, so you could turn them against the civilians of the country that built them, or you could turn them against a third-party country, which then opens up this enormous Pandora's box of uncertainty. You can always then claim: "Well, we didn't launch this attack, even though our robots did it. Actually, they got hacked and taken over."

We already see in cybersecurity that questions of attribution are very challenging. It looks like this country launched an attack, but it could have been made to look like they launched the attack when it was really some other country. Doing all of the forensics to figure that out is still possible, but it takes a lot of work and takes time, and introducing all of this uncertainty about attribution makes diplomatic and military decisions much more complicated.

So there is the possibility of unattributable kinetic attacks, including things like assassinations. To an extent we already have that with drone strikes, but we're still pretty confident of who's launching them; it's usually the United States. If many countries have this technical capability and start using assassination as a political tool more often, we could see it becoming much more common and much more difficult to actually identify who is behind which assassinations and why, again leading to all kinds of instability globally. I think that's very concerning.

You really have a situation where right now humans control everything. Do you want to create automated systems in which your vulnerability to cyberattacks becomes even more critical than it already is, even if you're confident that you can build encryption and other protections into your systems? They have had a lot of trouble even with the existing drone program with counterfeit hardware. Existing Predator and Reaper drones have shown up with counterfeit hardware manufactured in different factories in China, where they don't even know where the chips going into these big military systems are coming from, and of course you can build backdoors and all kinds of secret components into silicon chips that you can later activate and use to compromise those systems or even take them over.

We saw Iran downing a U.S. surveillance drone a number of years ago using what they call "spoofing." A lot of these systems use GPS, and if you get close enough and broadcast a GPS signal, you can convince the system that you are the GPS satellite, and then you can effectively control it. You're not in direct control of it, but you can effectively control it, so they could crash-land it, recover it, study it, and things like that. That kind of spoofing is also quite concerning.

There is also what it would mean if these sorts of systems are fielded in combat. If you know how a certain system works, you could create a beacon that essentially attracts an attack and put it on churches, schools, and hospitals to make your enemy look bad for attacking churches, schools, and hospitals. You could see those sorts of things happening. There are really a lot of serious concerns, I think, about how this is going to change the nature of warfare and global stability going forward, and hacking and cyber vulnerability is a big part of that.

ALEX WOODSON: Can you see a future in which the tech is refined in such a way that you would be comfortable with autonomous weapons systems, or would you just be completely against it on moral grounds, no matter what?

PETER ASARO: I think it's a design question ultimately. I have argued this in academic papers as well: If you actually develop a technology that's better at, say, distinguishing enemy combatants from civilians, great. Use it. Build that into all of your systems. But don't eliminate human control over the ultimate decision and supervision of those systems, because basically you're building in a sort of AI safety: if you point the gun at a civilian and pull the trigger, it doesn't fire. That seems quite acceptable and not a problem. You still have a human who is there in that process making that decision, but the system is then utilizing that enhanced capability to limit or augment what the human is able to do. It's ultimately the human who is making the judgment about whether it's necessary to use lethal force in a situation or not.
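To make that design idea concrete, here is a minimal sketch of the kind of human-in-the-loop veto described above: a human makes the engagement judgment, and an automated safety can only block, never initiate, the use of force. The names, the classifier, and the confidence threshold are hypothetical illustrations, not any real or proposed system.

```python
# Minimal sketch of a human-in-the-loop veto, as described in the interview.
# All names, the classifier, and the threshold are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Target:
    label: str         # e.g., "combatant" or "civilian", as judged by a classifier
    confidence: float  # classifier confidence in that label

def classifier_permits(target: Target, threshold: float = 0.99) -> bool:
    """Automated safety check: refuse to fire unless the system is highly
    confident the target is not a civilian. This can only constrain the human."""
    return target.label == "combatant" and target.confidence >= threshold

def fire_weapon(target: Target, human_authorized: bool) -> bool:
    """Fires only when a human has made the judgment to use force AND the
    automated safety does not veto it. The decision is never delegated."""
    if not human_authorized:
        return False  # no autonomous engagement, ever
    if not classifier_permits(target):
        return False  # AI safety: trigger pulled at a likely civilian, so no fire
    return True

# Example: the human decides to engage, but the safety blocks a civilian target.
print(fire_weapon(Target("civilian", 0.97), human_authorized=True))    # False
print(fire_weapon(Target("combatant", 0.995), human_authorized=True))  # True
```

The point of the sketch is the asymmetry: the automated check can prevent a strike but cannot authorize one, which is the opposite of a fully autonomous weapon.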

ALEX WOODSON: I haven't done research on this specifically, but it would seem to me that a lot of the private sector, a lot of big defense companies, might have an interest in making this technology available. It could just be another way for them to make money. Have you seen that? Are companies taking the ethics into consideration when they're developing these systems, or do they just see it as, "Oh, we're making a self-driving car, and then we're going to make an autonomous weapons system"?

PETER ASARO: It's good to get away from a monolithic view of what the military is or wants or what the military-industrial complex, if you will, the military contractors, want. There is actually a lot of gradation. The CEO of [BAE Systems], the largest military contractor in the United Kingdom, was at the World Economic Forum three or four years ago saying, "We don't want to build autonomous weapons, but if states start demanding it and that's what the marketplace wants, then we'll produce them." We saw the CEO of Dassault, which is the largest French aerospace company, saying almost the exact same thing two years later at the World Economic Forum. We have seen other engineering companies say that.

To me it seems like it's the larger companies that are in a sense more responsible and don't really want to build these systems and are used to conforming to things like: "We're not going to build blinding lasers, and we know how to train our engineers to design systems that don't do that. We just need to know what the ground rules are, and if that's what states are buying, that's what we're going to build and sell."

A lot of the enthusiasm I think around autonomy is more of a start-up culture, which I think came right out of the drone start-up culture. You had all these quad rotors and things, and everybody and their brother was building these and making money, and it's like: "Okay, now we'll make them for the military. We don't really know what the application is exactly, but let's strap some sensors on it, or maybe we'll strap some weapons on it and fly around and do something and get a big military contract." I think that's where a lot of the enthusiasm is.

Similarly, within the military I think a lot of the enlisted and lower-level to midlevel officers who are actually in command of soldiers in the field don't want to see these systems used or developed. Of course, they love drones and the ability to stand off and protect their forces and do their jobs, but actually removing their decision making disempowers them on the one hand. On the other hand, if the adversary has these systems, it's going to put all their troops at risk and the civilians they're trying to protect at risk. They also seem very unenthusiastic about these systems. It's maybe a niche of upper-level decision makers, who think, We're going to design all this stuff for the future, and who are perhaps ready to head off to a military contractor once they retire, who are the ones enthusiastic about developing these systems within the military, I would say.

ALEX WOODSON: Moving on to governance. We talked about this, and you brought it up a little bit in answer to my first question. Where are we in regard to domestic laws and international laws on autonomous weapons systems? Say the United States had an autonomous weapons system and deployed it in a war that we hopefully didn't start; say there is a traditional war, and the United States or any country brings out this autonomous weapons system. Would there be recourse under international law? Would there be anything to say, "No, you can't do this. We're going to band together, impose sanctions, this or that"? Is there anything like that out there at the moment?

PETER ASARO: Under existing international law and the Geneva Conventions, there is Article 36 of the Additional Protocols to the Geneva Conventions, which is the requirement to review all new weapons, means, and methods of warfare. Within the Pentagon this essentially becomes a legal review of all new weapons systems to ensure that they conform to existing international law. For the most part, that's a question of discriminate and indiscriminate use, which is actually kind of the subtitle of the CCW treaty: conventional weapons that are indiscriminate or cause disproportionate suffering and things like that.

There is a weapons review, so if a weapon is highly indiscriminate or can't be used in a discriminate manner at all, then it is prohibited. There are certainly autonomous weapons that would fall under that category. The question is whether more sophisticated autonomous weapons would be outside of that category and thus have some level of discrimination (arguably, with these kinds of algorithms, they might be highly discriminate in a certain sense), whether those would be permissible, and under what conditions they would not be permissible, and that's where it starts to get very complicated.

There is certainly a category that is prohibited: the really dumb ones that would drive around and shoot things indiscriminately. But then it's a question of what counts as discriminate. How do you test that? How do you ensure that when you test a system in a desert and then put it in a jungle, it's going to work, respond to weather in the right kinds of ways, and react appropriately when it encounters things outside of what it has been programmed to do? That is something we find all the time in, say, self-driving cars. It turns out to be very difficult to handle, say, electric wheelchairs, or cyclists who stand on their pedals and don't put their feet down at stop signs, things that are outside the expectations of the people who built the system, and it takes a lot of trial and error and testing of those kinds of systems to figure that out.

With self-driving cars, you're talking about a highly constrained roadway system that has signs and signals and legal expectations of what you're supposed to do and a fairly limited number of things that are supposed to be on the road. That's a much simpler problem than thinking about warfare and combat, which is a very unconstrained space that behaves in ways that often diverge from your normal expectations of things.

I think it's really challenging in that respect to think about what it would mean to test something to ensure that it met the existing Article 36 reviews. Actually, for the first few years the CCW discussions really focused on whether these weapons were already prohibited or permissible and then on to what extent the Article 36 review effectively regulates or governs those systems. The problem is that in the end it's a very simplistic kind of analysis of whether the weapon can ultimately be controlled and used discriminately or is just inherently indiscriminate. Since it's not inherently indiscriminate, it's basically permitted, but that covers a huge range of things and doesn't require the kind of testing that you would want for a self-driving car. You would probably want more testing for a weapon, and that's just not required under Article 36.

Then you have the limited number of countries that actually conduct Article 36 reviews, most of which are not as sophisticated or rigorous as the U.S. version of it. That doesn't seem like a valid solution. Plus, it's all done by countries on their own. They don't produce any sort of transparency around how they review weapons, whether they ever reject weapons, or what the basis of rejecting weapons is, so we don't really know how effective any of those reviews might actually be.

ALEX WOODSON: What in your view should be done to make sure that this technology doesn't proliferate?

PETER ASARO: Ban them. That's the easy answer.

The simple thing is just to say: "Wait." There are so many reasons why we shouldn't have these systems, and the proposed benefits are so limited and, in the relative scale of things, so much less likely to have a positive ultimate outcome than all of the negative things are to have negative outcomes, that it seems wholly justified to prohibit these systems. Then it's a question of how you do that. Of course, the best would be an international treaty agreed to by all the countries. Right now we're working within the CCW, which has 127 countries, I want to say. Not all countries are party to the CCW, and even if we get an additional protocol to that treaty, not all states would necessarily sign it.

The bigger issue in that context is that it's a consensus-based body, which means every state essentially has a veto, so any one state can derail the text or the whole protocol. So it has traditionally been pretty weak on developing prohibitions. There is a prohibition on blinding lasers from the 1990s and also a protocol on landmines that proved to be ineffective, so they had to do an additional treaty to effectively ban landmines globally. So there's a worry there in terms of what kind of treaty we actually get.

Then there are arguments about whether, if you get a treaty like the Treaty on the Prohibition of Nuclear Weapons that doesn't have the United States, Russia, and China involved in it, and those are the main players and developers, you are really regulating the technology or prohibiting the things you want to ban. I think that's an open question.

Of course, it would be better to get all the countries to agree, and honestly I think it's in the interests of the major powers to ban these weapons, because I think a lot of the advantage ultimately is going to go to smaller states that are able to undermine the military capabilities of larger states using really cheap, small, numerous swarms of little robots and things like that. Can you defend an aircraft carrier, or big, fancy jet fighters and things like that, as effectively against swarms of little robots? It's really expensive to make another aircraft carrier or another jet fighter, but you can produce a lot of these little robots.

I think there is good reason for them to participate in that and, of course, to avoid another kind of global arms race and all of that, so I'm still hopeful that we'll get engagement from the major powers.

Then I think it's a question of the content: What does that treaty look like? Essentially, it's a prohibition on systems that select and engage targets without human supervision, so you're creating a requirement to demonstrate what the human control and supervision is for each of these weapons. Rather than trying to define technically what makes a system autonomous, we say, "Well, you just have to demonstrate how the humans are in control of that system." Then there is the question of whether you need to verify these sorts of things and whether adversaries like the United States, Russia, and China are going to want the capability of verifying that their adversaries are fulfilling their treaty obligations. I think that's a question largely up to the states.

I think there are ways to do it. It's a little bit trickier than, say, nuclear weapons, which have a big industrial footprint and radiological and seismological ways of detecting them. These are going to be small systems, things that could be altered just through computer code, which I think makes verification more challenging, but I don't think it's impossible. Certainly you can act on cases of use, which is largely what we've seen with chemical munitions: if a state uses chemical munitions, that's when we punish and sanction them. We could do something similar with autonomous weapons.

ALEX WOODSON: I know that you co-founded an organization devoted to this. Where are you right now in pushing for this treaty and making this a reality? Maybe you could talk a little bit about the organization as well.

PETER ASARO: The Campaign to Stop Killer Robots has been pushing for this international treaty for the past seven years at the UN level. We started with a report in the Human Rights Council and then moved into the Convention on Certain Conventional Weapons in Geneva. It has been not exactly stalled but has not been very productive for the past couple of years there. They just extended those negotiations for two more years, so it will be at least two years before they act, which is disappointing because this is an issue that is only moving more and more quickly as time goes by, and they haven't set aside a lot of time. It's like three weeks each year to discuss the issues, which doesn't give you enough time to really develop the kinds of consensus that you need around treaty language.

But we're still hopeful and glad that they're continuing these discussions. Some other part of the United Nations may pick this up, or there might be, as there was with cluster munitions and landmines, an outside process, an Oslo or Ottawa process sort of scenario.

Right now we're gearing up for our international campaign meeting in Buenos Aires in a few weeks. We have campaigners from 130 NGOs and over 60 countries around the world, and they're all working with their local national governments to push for action at the United Nations. We have had a number of states within the CCW calling for a ban. Then we have 100-some countries calling for some kind of treaty or regulation within the CCW to do something.

Many of the NATO powers have been arguing for something like a political declaration or non-binding kind of statement coming out of the CCW. If we don't have a legally binding treaty, that's not really going to be very effective, and we have seen with AI more generally that self-regulation is not the best route to go.

A CCW protocol would govern international armed conflict among states that are parties to the CCW and things like that, but there are still questions about whether these kinds of autonomous weapons systems will be used by police forces. We have situations like tear gas, which is actually banned in international armed conflict under the chemical weapons ban, but police forces use it all the time for crowd control. We want to be sure that whatever treaty we have would also cover police uses, border control uses, and things like that that fall outside of armed conflict.

That may also require national laws that would prohibit bodyguard robots or something that would beat people up or kill them; you would want those sorts of laws in place as well. Again, you get into arguments about to what extent existing laws already cover that. There have been famous court cases in the United States of people who have set up booby traps on their doors and things like that and who are found responsible, of course, for the killings that happen.

ALEX WOODSON: There's a lot to think about there.

Just a last question. You said your organization was founded seven years ago.

PETER ASARO: My NGO, which is the International Committee for Robot Arms Control, was founded in 2009, and then the international Campaign to Stop Killer Robots was founded in 2012.

ALEX WOODSON: Since 2009 or 2012—I'm not sure I thought about this much back then, but in the last few years I have definitely seen a lot more about this issue. We have had events at Carnegie Council about it.

Just in terms of the general public gaining an understanding of this issue, how have you seen this conversation change? Have you seen people as they learn more about it say, "Oh, this is terrible; we can't let this happen" or "That's kind of interesting; this might have benefits for civilian casualties and things like that"? How have you seen the conversation change over the last five to six years?

PETER ASARO: I think initially, especially in that 2009–2012 period, the first thing that came to mind when you said "killer robot" was a drone, where you would actually have a human remote operator. So we actually had to spend a lot of time explaining, "Well, yes, think about a drone, but think about replacing that remote operator with a piece of software." That's what we're talking about. It's really the next generation of drones.

I think now that these small drones have become so ubiquitous and self-driving cars have become a thing, people are much more aware of the implications. There have also been some pretty big revelations, between the National Security Agency disclosures from Snowden's leak and, more recently, social media influence on political campaigns, foreign influence, and just the collection and surveillance of people's lives through smart devices, phones, cameras, doorbells, and all sorts of things. People are much more aware today, I think, of the implications of automating violence and tying it into a system that is collecting all of this data about us. Even back then people were scared and wanted to do something about it, but I think right now it's much more realistic and plausible to them, so I think that's helpful.

Governments have moved much more slowly, so convincing diplomats and politicians that this is an important issue has been difficult. They see it as a critical issue, I think, but they're much less willing to do anything about it, and I think that's where the frustration lies.

ALEX WOODSON: Thank you very much, Peter.
