Wendell Wallach, co-director of Carnegie Council's Artificial Intelligence & Equality Initiative, was among the keynote speakers at the REAIM Summit in the Netherlands on February 15-16, 2023. During his address he outlined why the logic supporting the development and deployment of autonomous weapons systems (AWS) is a continuation of the escalatory deterrence strategy that characterized the Cold War and fails to grasp how such systems will change the conduct of warfare. "Those security analysts who have looked seriously at the challenges inherent in introducing AWS into warfare have come to appreciate that it is deeply problematic. Meaningful human control of AWS appears impossible." Wallach expanded on his thoughts in the essay below.
In which Walt Disney film was there a mass suicide? Even older trivia players have difficulty with this question. The answer is Disney's nature documentary White Wilderness (1958), which showed large numbers of lemmings jumping off cliffs into the sea to what was proclaimed as certain death. Although a portion of the footage was staged, overpopulation can in fact drive lemmings, who can swim, to jump into the sea in search of a new home. In the process some lemmings drown, but their behavior is not suicide.
On February 15-16, 2023, the Netherlands hosted delegates from more than 60 countries, including business and military leaders and scholars, in The Hague for the REAIM Summit, focused on responsible AI in the military domain. After eight years of multilateral discussions, efforts to agree on legally binding rules to limit the development and deployment of autonomous weapons systems capable of selecting and destroying human targets have failed. In response, REAIM was organized to further talks and initiate a new process to find agreement on appropriate guidelines, restrictions, and global governance of AI use in military affairs.
The Summit covered many topics, from the use of AI for decision support to logistics planning, but most in attendance were particularly concerned with autonomy for weapon systems. Speakers from weapon manufacturers assured those in attendance that AWS would be thoroughly tested. Generals proclaimed that there would be meaningful human control because AWS would only be deployed by commanders who were responsible and accountable for their use. They argued that AWS were essential because the speed of combat was outpacing the ability of even trained personnel to respond quickly enough. On the surface such arguments appear reasonable, but when considered in light of the features AI introduces into warfare, they are naïve and place humanity on a suicidal trajectory.
In 2015 at the UN in Geneva, the phrase "meaningful human control" was introduced by the nonprofit advocacy group Article 36 during informal meetings about whether to restrict the development of AWS. By 2016, all parties were using the phrase, but it soon became clear that they were talking about very different concerns and approaches.
During the intervening years, international security and military analysts who have looked seriously at the challenges inherent in introducing AI and AWS into warfare have come to appreciate that doing so is deeply problematic. Ensuring meaningful human control throughout the lifecycle of a munition would require vast infrastructure at great cost and appears difficult if not impossible to implement. For example, AI systems with any capacity for learning (machine learning) would require near-constant testing: new inputs can change a system's behavior and increase the likelihood of its acting in unforeseen ways. Meanwhile, using off-the-shelf software and components developed by tech companies for non-military applications, AWS will quickly proliferate to smaller countries and non-state actors with limited resources.
As users of ChatGPT and other generative AI applications such as Microsoft's new version of Bing are learning, AI can produce astounding results, but on occasion it is just plain stupid. AI systems are probability machines that will at times act in unanticipated ways, particularly when deployed in complex environments such as warfare. Low-probability outputs or actions are often insignificant, but occasionally a low-probability event will have a high impact. Probability multiplied by impact equals risk. An AWS with a large payload is inherently risky. As Nassim Taleb has tirelessly pointed out, low-probability, high-impact events are much more common than we naturally recognize.
Imagine that a commander, faced with a serious quandary and under time pressure to save lives, deploys an autonomous drone carrying a high-power munition. Given the fog of war, commanders seldom know the actual probability of success, but for the sake of argument let us presume the commander knows there is a high, 87 percent likelihood of achieving the goal and saving lives, and only a 13 percent probability of failure. Unbeknownst to him, however, there is a 1 percent likelihood that the AWS will escalate hostilities in an unintended manner. Will commanders making the final decision be held accountable should the escalation ensue? Probably not, given that they can hide behind national security considerations and the high expectation of success.
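To make the arithmetic behind this worry concrete, the sketch below works through a simple expected-risk calculation using the probabilities from the hypothetical scenario above; the relative cost assigned to an unintended escalation is an illustrative assumption, not a figure from the text.

% A minimal sketch of the expected-risk arithmetic; the cost ratio below is an
% illustrative assumption, not a figure from the scenario.
\documentclass{article}
\begin{document}
Expected risk sums probability times impact over the possible outcomes:
\[ R = \sum_i p_i \, c_i . \]
With the commander's estimates, $p_{\mathrm{fail}} = 0.13$ and
$p_{\mathrm{escalate}} = 0.01$. If an unintended escalation is assumed to be,
say, a thousand times as costly as an ordinary mission failure
($c_{\mathrm{escalate}} = 1000\, c_{\mathrm{fail}}$), then
\[ R = 0.13\, c_{\mathrm{fail}} + 0.01 \times 1000\, c_{\mathrm{fail}}
     = 10.13\, c_{\mathrm{fail}}, \]
so the rare, high-impact outcome dominates the overall risk despite its low probability.
\end{document}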
During the Cold War, deterrence logic fed the adoption of advanced technology, from supersonic jets to nuclear submarines, and a never-ending growth in warheads. That logic began to break down when, for example, Trident submarines able to navigate under the Arctic shelf shortened the time between the launch of nuclear missiles and bombs destroying Moscow. The shortened warning time this new technology introduced would have been inadequate for Soviet leaders to properly assess whether they were indeed under attack. As a result, the prospect of automating a retaliatory nuclear response, or even launching a preemptive strike, became increasingly likely.
Today, China and America, two countries leading research and development in military AI applications, are slowly ratcheting up tensions. Deterrence logic is seemingly back. Digital technology is being rapidly adopted and embedded in military applications. Over time, however, escalating reliance on AI will of necessity dilute meaningful human control and end in a total abrogation of human responsibility. AI does not know when it has made a mistake. But we humans do have the capacity to recognize when we are on a suicidal course.
Like the lemmings, defense and security strategists are not acting out an unconscious death wish. They have taken on the responsibility of discouraging warfare and of finding a way for their nation to survive should a war occur. They know that deterrence logic is flawed, and that a pace of combat sped up and then compensated for by AI decision-making cannot be meaningfully controlled. However, they do not know what else to do. There is little confidence that a ban on AWS will work, given the difficulties of putting effective compliance and verification in place.
Nevertheless, the time has come to acknowledge the trap we have created and turn away from reliance on increasingly risky defense strategies. The U.S. has joined the REAIM process and has released a political declaration that, despite omitting concrete references to AWS, clearly acknowledges the many ways autonomous features in warfare are problematic. Let us not impede what may very well be the last opportunity to put in place the less risky standards and global governance needed to limit the use of AWS in the military domain.
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the author and do not necessarily reflect the position of Carnegie Council.