Drone in Afghanistan, 2009. CREDIT: David Axe (CC)

The New Assassination Bureau: On the 'Robotic Turn' in Contemporary War

Nov 6, 2012

A recent issue of The Economist featured an eye-catching cover entitled "Morals and the Machine." The article of the same name opens by outlining the plot of the film 2001, in which a spaceship's computer, HAL, faces a dilemma. His instructions require him both to fulfill the ship's mission and to keep the true purpose of the mission secret from the ship's crew. To resolve the contradiction, he tries to kill the crew.

The article concludes that:

As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was.1

Using this as a starting point, our essay focuses not on the day-to-day use of such technology—though ethical and legal dilemmas created by increasing automation abound there as well—but on one specific area of contemporary international relations: what we call the 'robotic turn' in contemporary warfare.

As The Economist article points out:

Military robots come in an astonishing range of shapes and sizes. DelFly, a dragonfly-shaped surveillance drone built at the Delft University of Technology in the Netherlands, weighs less than a gold wedding ring, camera included. At the other end of the scale is America's biggest and fastest drone, the $15m Avenger, the first of which recently began testing in Afghanistan. It uses a jet engine to carry up to 2.7 tonnes of bombs, sensors and other types of payload at more than 740kph (460mph).2

Such devices are remaking modern conflict and opening up the prospect of a battle space that resembles, as a Huffington Post article recently suggested, the Terminator movies.3 Indeed, as its author, Nick Turse (co-author, with Tom Engelhardt, of a new book on 'drone warfare'4), points out:

[the] Unmanned Systems Integrated Roadmap, FY 2011-2036, a recently released 100-page Defense Department document outlining American robotic air, sea, and land war-fighting plans for the decades ahead (is almost) the sunny side of a future once depicted in the Terminator films in which flying hunter-killer or "HK" units are sent out to exterminate the human race.5

Leaving films aside, there is a good deal that might be said about the technological determinism (and almost blind utopianism) that goes into the assumptions here. But in this essay we focus largely on what we might call the 'praxeological'6 dilemmas that some of these technologies bring about—that is to say the practical ethico-political-legal dilemmas.

Our argument is thus divided into three sections. The first looks at the practice of 'robotic' warfare. The second looks at a number of the ethical and legal dilemmas this raises, and a final section examines what we think is the most problematic area of contemporary war—the use of drones. Our specific concern is the issue of agency in conflict. In this final section we hope to show why we believe this essay's rather fanciful title is justified and why, alas, it is more than just a fancy.7

Practice

The use of drones is increasing. Initially used for reconnaissance purposes, drones began to be used in combat missions after President Bush's Secret Memorandum of Notification authorizing the CIA to kill members of al-Qaeda in what was rather quaintly termed 'anticipatory self-defence.' Drone technology is now possessed by the U.S., the UK, China, France, Italy, Iran, Israel, Russia, South Korea, and Turkey, with others about to join the club, though only the U.S., the UK, and Israel have used armed drones in combat. That will surely not persist for long, and it will obviously not just be states that acquire or deploy such technology.

The growing sophistication of the technology and its increasing use promote the assumption that drones are adding to the 'humanization' of war (as Christopher Coker might put it8). Evidence of this view is omnipresent in military and political circles, where the effects of drones in reducing combat deaths on 'our side' are emphasised. In so many ways, drones can reduce the risks of conflict for liberal states which, after more than a decade of the 9/11 wars and the generally recognised failure of COIN, understandably would rather use drones than boots on the ground. Indeed the argument that drone use avoids risking soldiers, or indeed civilians, in messy COIN activities seems compelling. The capacity of such technology to be ever more precise in targeting the enemy is often lauded, as in Kenneth Anderson's claim that "drones are a major step forward towards much more discriminating use of violence in war and self defence—a step forward in humanitarian technology."9 An additional and related argument, widely used by the Obama administration, is that drones provide a massive step up in intelligence capability, perhaps avoiding the need for human intelligence gathered at some risk in hostile environments. This has been an argument for the widespread use of drones in Yemen since March 2012, for example, and also for intelligence gathering on Iran's nuclear progress. Added to this is the pragmatic issue of cost. A Predator drone costs around $4.5 million, while a fighter jet costs around $150 million. Drones seem to provide value for money.

So, there seem to be strong justifications for the growing use of drones in contemporary zones of conflict and contention. After all, if the practice of lethal force for what we presumably think of as justified and legitimate goals is ongoing, we surely want that practice to be as effective and efficient as we can make it—and drones, it would appear, help to do that.

There is a counter-view, however. This is simply the fact that, in practice—and in many different contexts—far from drones being an aid to military and political effectiveness, they are a very blunt weapon indeed and might actually be a hindrance. We need to remember that the use of military force is, as every student of Clausewitz knows, fundamentally about political effectiveness, not merely military effectiveness. But as the growth in so-called 'collateral damage' as a result of U.S. drone strikes in Afghanistan and Pakistan shows, the technology is not always good at sorting out the combatant wheat from the non-combatant chaff; and when it fails to do so, the political—as well as the human—consequences can be very severe. As David Kilcullen pointed out in his testimony to Congress in April 2009, drone strikes often "give rise to a feeling of anger that coalesces the population around the extremists."10 In other words, the political costs of using the technology may well outweigh any military benefits. As drones become more common in the practice of modern war, the result can be readily imagined—a perfect storm of resentment and grievance that is likely to sour relations between allies (as it has done between the U.S. and Pakistan, though there are certainly other reasons for that as well), worsen and prolong conflicts, and impede the prospects of diplomacy and conciliation. Thus, even at the practical political level, it would seem that drones are, to put it charitably, a double-edged sword. To return to The Economist's article:

In the air, on land and at sea, military robots are proliferating. But the revolution in military robotics does have an Achilles heel, notes Emmanuel Goffi of the French air-force academy in Salon-de-Provence. As robots become more autonomous, identifying a human to hold accountable for a bloody blunder will become very difficult, he says. Should it be the robot's programmer, designer, manufacturer, human overseer or his superiors? It is hard to say. The backlash from a deadly and well-publicized mistake may be the only thing that can halt the rapid march of the robots.11

Another aspect of this is the extent to which the use of drones is context-specific but leads to almost inevitable 'spillover.' In Yemen, for example, drones are being used at the express invitation of the government, which has effectively outsourced its counter-terrorism against key AQAP [al-Qaeda in the Arabian Peninsula] leaders to U.S. technology. In Pakistan, by contrast, the government is (at least publicly) opposed to the use of drones over sovereign Pakistani territory, despite its very close military, political, and financial ties to the U.S. Neither case, it seems to us, necessarily bears out the claims of the supporters of drone use, but the arguments that are relevant in each case are, of course, very different.

In the Pakistani case, for example, there is a good deal that might be said about the use of drones outside the framework of international law, which might not be the case, at least formally, in the context of Yemen, whose sovereign government has actually invited the drone strikes—though one might ask how sovereign the Yemeni government actually is in practice. But even if we disregard this, there are very real questions about the use of drones in so-called targeted assassinations, both as to the political consequences of such actions (the 'collateral damage and political effectiveness' problem discussed above) and as to the ethical implications for the proper conduct of war or of foreign policy more generally. Drones have been used to kill over 3,000 people designated as 'terrorists'—a categorisation that is, to put it mildly, stipulative, as everyone aside from the American government seems to realize. And, although statistics vary, drone use also appears to have killed more than 800 'civilians.' Thus President Obama's drone program is a huge unmanned aerial offensive with an astonishingly high rate of killing non-combatants.

Ethics and Law

This brings us, of course, to the ethico-legal dimension of the use of drones, or what we will call the praxeological aspects of using drones, where the issues become murkier still. To begin with, we suggest it is at least prudent to separate out the legal and the ethical while acknowledging (in opposition to a strict Hartian positivism)12 that they are ineluctably linked. In part this is because there is a marked tendency, perhaps especially among international lawyers, to assume that what is illegal is, qua illegal, also immoral and vice versa.

Mary Ellen O'Connell's argument,13 for example—to wit, that drones are permissible (in theory at least) in combat zones but not outside them, since there no state of war exists and thus the laws of war do not apply—is a good example of this and is, of course, true as far as it goes. But the point is that those who advocate the use of drones against 'terrorists' in (say) Waziristan, Pakistan would argue that the political and ethical requirements of U.S. national security should trump the, in any case debatable, claims of international law in this context. (Hence the CIA conducts drone strikes in Yemen, Somalia, and of course Pakistan, areas where the United States is not officially at war.) In other words, they are making an 'ethical' (or ethico-political) claim rather than a merely jurisprudential one and arguing (effectively, if not always explicitly) that the ethico-political imperative trumps the narrowly jurisprudential one. Thus the appropriate response to this claim cannot be simply to argue the case in terms of jurisprudence (or at least not in the first instance), but rather to argue either (a) that the ethico-political interest of the United States should not be allowed to trump international law in this case, or (b) that it is false to claim that the extrajudicial killing of 'terrorists' outside formal war zones actually constitutes a defensible ethico-political claim, or both.

O'Connell would certainly argue (a)—and has done so at length in a recent book14—but we suggest that it is at least as important to make the second case, partly for the reasons suggested in the first section above but also for distinct ethical reasons in the context of the just war tradition. One of us (Rengger) has made a case similar to this already in a forthcoming book,15 so we will not repeat that argument in detail here, but merely emphasize that there are strong ethico-political reasons, and not merely jurisprudential ones, for suggesting that the use of drones, at the very least in the manner in which they have currently been used, raises very severe ethico-political questions.

A second issue here is the extent to which, certainly under the Obama administration, drones have become a massive instrument for targeted killing, the legal basis of which, in any context, is at best highly debatable and the ethical implications of which are surely profoundly disturbing. Think of President Obama himself drawing up a list of those who should be targeted and killed (a so-called 'kill list' with the names of alleged al-Qaeda and AQAP insurgents)—often when they are pursuing nothing remotely connected to military operations against U.S. or allied forces. A central plank of a good deal of international humanitarian law, after all, is the distinction between the 'corporate' persons of soldiers as combatants and their 'individual' status as humans, and it has been a commonplace of writing on just war for many decades that one can kill soldiers engaged in combat activities but not when they are not so engaged (when they are prisoners of war, for example, or on leave with their families). Drones (and their programmers) cannot make such fine distinctions—and, we cannot help adding, nor, it seems, can President Obama.

An Assassination Bureau for the 21st Century?

It is in this sense, we argue, that the praxeological issues discussed above point to the real heart of the issue that the 'robotic turn' in contemporary warfare raises. And notwithstanding what we have said above, this is not to be found in the manner in which drones are used or even the consequences of their use—however problematic or calamitous—but in the simple fact of what they are. As P. W. Singer quite rightly emphasises, 'the introduction of unmanned systems to the battlefield doesn't change simply how we fight, but for the first time changes who fights at the most fundamental level. It transforms the very agent of war, rather than just its capabilities' [emphasis added].16

And this matters in the current context because the 'ethical character of war' depends upon the sense that it is human beings who fight it. Consider the debate (for example) over the extent to which militaries should be allowed to prioritize the defence of their own soldiers over harm to a civilian population (a debate which was central to discussion of the actions of the Israeli defence forces against Hezbollah in 2006, but which was also central to NATO policy in Kosovo in 1999). Whatever position one takes on this, it is clear that the relevant criterion involves weighing the claims of one set of human beings in conflict against another set. If you remove the 'human' from this equation, it is simply no longer possible to think properly about the question at all. The very crass (and, one is afraid to say, all too predictable) use of the term 'bug splat' to denote the killing of a 'terrorist' by drone strike obviously dehumanises those killed, and is clearly meant to.

Clearly this issue is already raised by drones now, but it will become more pressing still when the next generation of drones—which, if reports are to be believed, will be completely autonomous and not controlled by remote operators—appears. The whole point of seeing the realm of war as an ethical realm at all is to see it in terms of the human character of that realm; the moral life is a life inter homines.17 The robotic turn threatens that balance at its very heart. Those of us who care about the restraints on war that its ineliminably human character throws up—to be sure flimsy and perpetually overridden—should be profoundly concerned about the increasingly casual deployment of technologies that remove those restraints, not just contingently but necessarily.

It will be said, of course, that the benefits of drone technology are such as to render philosophical objections of this kind null and void. But at the very least, we should be aware of the path we are starting down and proceed in full knowledge of the possible consequences. In his book on 'post-human' war in the 21st century, Christopher Coker points out that what we are witnessing is the slow elimination of the characteristics that have marked out the idea of the warrior over the centuries and across cultures. The idea of the warrior, he suggests (echoing a phrase of Michael Ignatieff), is inseparable from the idea of the warrior's honor. But the warrior's honor depends upon the possibility of looking your enemy in the face—depends, indeed, on the assumption that the enemy has a face. Drones have no 'faces.'

The point here is to emphasise that almost the whole of our conceptual understanding of war—ethical, political and, if you like, existential—depends on this fact: that 'War' is a distinctively 'human' activity. The destruction of one ant colony by another is precisely not war, because ants do not think and feel and conceptualize. The widespread use of drone technology, in other words, threatens to tip our conceptual understanding of war into incoherence. More, perhaps, than all of the praxeological and practice-oriented problems we have raised above, for all their importance, it is this that ought to give us the most pause.

In his story The Assassination Bureau, Ltd, whose title we have borrowed for this essay, Jack London imagines a society devoted to killing for good causes—the 'targets' must be truly deserving of death. But as the (unfinished) plot unfolds, Dragomiloff, the founder of the society, now engaged in killing his former colleagues, begins to realize that such allegedly high-minded purposes in the end become simply one more excuse for killing—both for him and for his former colleagues. Do we really want to turn war-making in our age into a new 'assassination bureau,' carried on with all the savage inventiveness 21st-century technology can provide? Surely the dangers of that far outweigh any perceived benefits?

NOTES

1. See http://www.economist.com/node/21556234, accessed 8/6/2012.

2. See http://www.economist.com/node/21556103, accessed 8/6/2012.

3. See http://www.huffingtonpost.com/nick-turse/obama-drones_b_1558965.html, accessed 4/6/2012.

4. Nick Turse and Tom Engelhardt, Terminator Planet: The First History of Drone Warfare 2001–2050 (London: Despatch Books, 2012).

5. See http://www.huffingtonpost.com/nick-turse/obama-drones_b_1558965.html, accessed 4/6/2012.

6. This term was used, in much the same way as we use it here, by Raymond Aron in Paix et guerre entre les nations (Paris: Calmann-Lévy, 1962).

7. The Assassination Bureau, Ltd was, of course, an unfinished novel written by Jack London. It was also made into a film in 1969.

8. See Christopher Coker, Humane Warfare (London: Routledge, 2002).

9. Cited by Daniel Brunstetter and Megan Braun, 'The Implications of Drones on the Just War Tradition,' Ethics and International Affairs 25, no. 3 (Fall 2011), p. 357.

10. 'Effective Counterinsurgency: The Future of the U.S.-Pakistan Military Partnership,' Hearing of the House Armed Services Committee, April 23, 2009. Also cited by Brunstetter and Braun, p. 358. Kilcullen's arguments have, however, also been strongly criticised [BY WHOM].

11. See http://www.economist.com/node/21556103, accessed 8/6/2012.

12. The locus classicus is, of course, H. L. A. Hart, The Concept of Law (Oxford: Oxford University Press, 1961).

13. Hearing before the Subcommittee on National Security and Foreign Affairs, 111th Congress, 2nd Session, April 28, 2010. Cited in Brunstetter and Braun, pp. 344–345.

14. Mary Ellen O'Connell, The Power and Purpose of International Law (Oxford: Oxford University Press, 2010).

15. Nicholas Rengger, Just War and International Order: The Uncivil Condition in World Politics (Cambridge: Cambridge University Press, forthcoming 2013).

16. P. W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (New York: Penguin, 2009).

17. We are paraphrasing Michael Oakeshott's essay on Hobbes. See 'The Moral Life in the Writings of Thomas Hobbes' in Rationalism in Politics (London: Methuen, 1962).
