As AI systems are deployed and scaled, calls are frequently made for “meaningful human control” or “meaningful human interaction on the loop.” Originally an engineering term requiring a human “in the loop” who could halt a system, the phrase “on the loop” or “nearby the loop” has been co-opted by policymakers to distance humans further from the systems they are meant to oversee. We think that putting into practice the notion that any human in, on, or nearby the loop could oversee complex systems, particularly those incorporating AI, could lead to disastrous consequences.
In 1983, Stanislav Petrov was one such “human in the loop” in the Soviet Air Defence Forces. Petrov’s job was to use both his technical insight and human intelligence to alert the Soviet leadership should the data support the launch of a nuclear weapon. When false alarms alerted Petrov to an imminent nuclear attack, his understanding of the limits of machine-generated analysis led him to judge it a malfunction – halting what could have been a nuclear disaster.
Petrov had 30 minutes to make a decision. Today’s automated systems make decisions in milliseconds. The way that we are talking about “meaningful human interaction on the loop” in relation to AI systems today is problematic, if not fundamentally flawed, for the following reasons:
- You cannot have “meaningful interaction” with data, sensors, or actuators at the time of data collection and operation. What is the loop? It is the whole system – the sensors, the actuators, the data (mostly historical, often poor quality, almost always difficult to interrogate), the machine learning or AI, the pieces separately, and the interoperable whole. No single human has the capacity to understand and oversee all of these parts, let alone to meaningfully intervene.
- You cannot “meaningfully interact” with active code. Coding is a highly specialized skill requiring expert knowledge. Where code is embedded in other systems, few people, if any, are in a position to review these elements in real time (particularly people with the relevant security clearance). The skill sets needed to interrogate these systems are required both at the point of build and regularly throughout their lifecycle, as they will inevitably fail or produce unintended outcomes.
- People get bored when working with autonomous systems. The situations where machines can be autonomous, but require human supervision, are often the most dangerous. Humans tune out and get bored or distracted – with disastrous effects. Research data shows that humans cannot actively supervise machines for long periods of time without risk increasing, particularly where the systems are largely autonomous. Part of this risk also links to Weizenbaum’s magical thinking: Humans often assume that systems cannot fail – and yet they do.
- The complexity, speed, and scale of many autonomous, and even automatic, systems do not allow time to challenge them. The speed at which information is provided, and at which time-sensitive decisions must be made, will often render appropriate human intervention impossible.
- If “meaningful human interaction on the loop” is remote, the risks are even greater. Network delays (due to bandwidth constraints, lag time, human cognitive delays, and information poverty – not having all the information you need, some of which cannot be captured by automated or autonomous systems) amplify existing risks. Black boxes and lack of transparency prevent predictability. Even where risks appear low, the speed and scale of autonomy may expedite or expand the potential for serious harm.
- The term “human” usually means a fairly narrow type of human, which is not representative of broader humanity or humankind. The human is often the most overlooked term in this phrase, and yet it does a lot of the heavy lifting. The humans who have access to, and understanding of, these systems are a particularly narrow group with a particular worldview and set of values. They do not necessarily understand or represent those affected by the automated systems and are not always listened to in the face of commercial imperatives. Petrov had the power to override the system – would someone today be able to do the same?
- The skills required to understand complex adaptive systems will be lost with increased automation. Those with the ability to understand and question the flow of the entire system, including whether it is performing ethically and in line with responsible codes of conduct, are increasingly sidelined, or their skills are lost. Interdisciplinary skill sets will not be easy to replicate or replace in the long term. Complex adaptive systems can at times behave in ways that cannot be predicted, even by those with the right skill sets. The likelihood of low-probability, high-risk (black swan) events will increase as more automated and autonomous systems are connected and scaled.
Each time you hear about “meaningful human interaction (or control) on the loop,” challenge what this means in the context of a specific system. It is more useful to ask whether automation or autonomy is in fact the right choice for the problem at hand. Is automation suitable, legally compliant, and ethical? Having experts with diverse and interdisciplinary skills involved throughout the development and life cycle of a solution and system directed at solving a challenge will have far greater impact than any “human on the loop.” Once the system is active, merely being “on the loop” will almost certainly not give any human the capability to pause, reflect, question, and stop the trajectory of the machine, as Petrov was able to.
Dr. Kobi Leins is a visiting senior research fellow at King's College, London; expert for Standards Australia providing technical advice to the International Standards Organisation on forthcoming AI Standards; co-founder of IEEE's Responsible Innovation of AI and the Life Sciences; non-resident fellow of the United Nations Institute for Disarmament Research; and advisory board member of the Carnegie Artificial Intelligence and Equality Initiative (AIEI). Leins is also the author of New War Technologies and International Law (Cambridge University Press, 2021).
Anja Kaspersen is a senior fellow at Carnegie Council for Ethics in International Affairs. Together with Senior Fellow Wendell Wallach, she co-directs the Carnegie Artificial Intelligence and Equality Initiative (AIEI), which seeks to understand the innumerable ways in which AI impacts equality, and in response, propose potential mechanisms to ensure the benefits of AI for all people.