Commentary | 12 April 2023

The challenges of AI command and control

More than a year into a war with catastrophic humanitarian consequences in the heart of Europe, we are constantly reminded of the costs of military conflict. Once the fighting in Ukraine ends, we will face the prospect of asking hard questions about the lethal strategic and tactical decisions made, and of assigning responsibility for those choices. Traditionally, the ethical dilemmas surrounding decision-making in war have centred on human judgement, but the advent of ever more advanced artificial intelligence (AI) technologies raises new questions. Optimists might argue that machines could not have done a worse job than humans. Can AI-powered systems replace human commanders? And, more importantly, should they? In a recent article, I argue that AI is not only a powerful force multiplier in modern warfare, but potentially a strategic actor in its own right.

As human-machine interactions become ever more deeply woven into modern conflict, assigning responsibility for wartime choices will become tactically and technically challenging, as well as ethically charged.

The in-the-loop, off-the-loop false dichotomy

Much of the recent literature revolves around two competing schools of thought. One holds that powerful computers have ushered in an era of automation promising dramatic improvements in precision, speed, and reliability that will not only make warfare safer and more humane, but also compensate for humans’ cognitive and biological weaknesses in combat. The other holds that the new breed of autonomous weapons (or “killer robots”) and AI-enabled machine overlords lack common sense in novel situations and moral responsibility in the use of lethal force, and that the whole project should therefore be abandoned to avoid potential catastrophe. Both views have merit. Assessments of the potential benefits and risks of autonomous weapons must, therefore, be calibrated according to the likely circumstances of their use.

As military AI advances, three trends appear clear. First, the drive to synthesise AI technology with military capabilities is irreversible and exponential. Second, the effects of this phenomenon on human agents in war are neither incontrovertible nor pre-determined. Finally, machines cannot reliably complement or augment, let alone replace, the role of humans in command-and-control decision-making. Automating the observe-orient-decide-act (OODA) decision-making loop with the help of AI is, I argue, a bad idea. AI is not a passive or neutral actor; if we automate the decision-making loop, AI will become (either by conscious choice or perhaps inadvertently) a de facto strategic actor in war – the “AI commander” problem.


An AI command-and-control scenario: Conflict in the Taiwan Strait, 2027

How might AI-augmented human-machine teaming affect a crisis between two nuclear-armed adversaries? Consider the following fictional vignette. In 2027, the ailing helmsman President Xi Jinping, keen to fulfil his “China Dream” and secure his legacy in the history books, invades Taiwan.

“Operation Island Freedom”

Chinese Air Force stealth fighters (“Mighty Dragon”), flanked by a swarm of semi-autonomous AI-powered “loyal wingmen” drones (“Little Dragons”), deploy cyberattacks and missile strikes to destroy Taiwanese air defences and command-and-control infrastructure. Semi-autonomous loitering “barrage swarms” soak up and eliminate the bulk of Taiwan’s remaining missile defences, leaving Taipei virtually defenceless against a Beijing-imposed military quarantine.

Amid this blitzkrieg attack, the “Little Dragons” receive a distress signal from a swarm of autonomous underwater vehicles – on a surveillance and recon mission off the coast of Taiwan – warning them of an imminent threat posed by a US carrier group. With the surviving swarm running low on battery power and communications with China’s command-and-control out of range, the decision to give the engagement order is left to the “Little Dragons.” This decision is made without human input or oversight from China’s naval ground controllers.

On a routine patrol of the South China Sea, the USS Ronald Reagan’s anti-drone defences detect aggressive behaviour from a swarm of bulky Chinese torpedo drones. As a pre-emptive measure, the carrier uses its torpedo decoys to draw the Chinese drones away from the carrier group and then attempts to destroy the swarm with a “hard-kill” interceptor. Despite these countermeasures, the carrier group cannot destroy the entire swarm, which delivers a punishing blizzard of kamikaze attacks, neutralising the carrier’s defences and rendering it hors de combat. The remaining drones now head at full speed for the mother ship.

In response to this bolt-from-the-blue attack, the Pentagon authorises a B-21 Raider strategic bomber on a deterrence mission to launch a limited conventional counterstrike on China’s Yulin Naval Base, Hainan Island – housing China’s submarine nuclear deterrent – designed to degrade but not decapitate Chinese command-and-control. The bomber is supported by a swarm of “Little Buddy” unmanned combat aerial vehicles, fitted with the latest “Skyborg” AI-powered “virtual co-pilot”, affectionately known as “R2-D2.”


From a prioritised list of pre-approved targets, “R2-D2” applies the latest AI-driven “Bugsplat” software to optimise the type of attack, the weapons to employ, the timings involved, and any deconfliction considerations, such as avoiding friendly fire. “R2-D2” passes this targeting information on to the “Little Buddies,” which wait for the green light to attack. With targets in sight and weapons selected, “R2-D2” orders a pair of “Little Buddies” to identify and confuse Chinese air defences using their electronic decoys and AI-driven infrared jammers and dazzlers.

Escalation increases with each passing turn. Beijing views the US B-21 operation, launched in response to “Operation Island Freedom,” as designed to undermine its sea-based nuclear deterrent. Believing it cannot risk allowing US forces to frustrate the initial successes of its invasion of Taiwan, China launches a conventional pre-emptive strike against US forces and bases in Japan and Guam. As a deterrence signal, China concurrently detonates a nuclear weapon at high altitude off Hawaii’s coast, generating an electromagnetic pulse. Time is now of the essence.

The attack is designed to disrupt and disable any unshielded or unprotected electronics on nearby ships and aircraft without directly damaging Hawaii. It is the first use of nuclear weapons in warfare since 1945. Because neither side understands the other’s deterrence signalling, red lines, command decision-making processes, or escalation ladders, neither can communicate that its actions are calibrated, proportional, and intended to force de-escalation.

Psychological insights into human-machine military interactions

How has AI altered our understanding of war (and ourselves)? Three psychological insights relating to human-machine interactions – the dehumanisation of AI-enabled war, human psychology and cognitive bias, and military techno-ethics in digitised warfare – illuminate how AI will influence our capacity to think about modern warfare’s vexing political and ethical dilemmas.

Are machines hollowing out humanity?

An idea fast gaining prominence is that humans will soon become the Achilles heel of the AI-enabled techno-war regime. In other words, intelligent machines acting as autonomous agents will soon no longer need humans. The logical end of this slippery slope is a de facto AI commander, whereby the act of killing – and thus the responsibility attached to agency – is outsourced to machines.


Whether offloading difficult moral decisions to machines amounts to immorality, or is defensible on the grounds of military expediency, remains an open philosophical, ethical, and political question. Evidence indicates, for example, that drone warfare has not dehumanised warfare in the way many expected. Counterintuitively, rather than treating combat as a video game, human drone pilots often form deep emotional bonds with their targets, and in this war at a distance, many pilots suffer long-term mental health issues akin to those of traditional combat experience.

The argument against deploying AI and autonomous weapons that centres on the absence of human emotions therefore oversimplifies both the complexity of human emotional and cognitive states and the psychologically nuanced nature of human-machine interactions, rendering it empirically flawed.

Human psychology and cognitive biases

Human biases can affect human-machine interaction in several ways. They can make decision-makers more predisposed to use capabilities simply because time and resources were invested in their acquisition, which may produce false positives about the necessity for war; this tendency is known as the “Einstellung effect.”

Moreover, cognitive bias can make decision-makers prone to unreflectively assign positive moral attributes to the latest techno-military Zeitgeist – in this case, AI.

Additionally, humans tend to anthropomorphise machines and thus may view technology as a replacement for vigilant information-seeking, cross-checking and adequate processing supervision.


Military techno-ethics and moral responsibility of war

Coding ethics into AI-enabled capabilities has emerged as a possible solution to the complex, nuanced, and highly subjective ethical-political dilemmas of war. The quest to imbue AI with a human conscience (or “AI consciousness”) risks diffusing the moral responsibility of war onto technology, smoothing over – rather than eliminating – moral and ethical tensions between discrimination, responsibility, and accountability for actions and accidents in war.

Even if we accept the (albeit tenuous) argument that intelligent machines are morally and ethically preferable to humans on the battlefield for the reasons described, several problems remain. What should a person do in an ethical situation of type x? What moral and ethical codes should we bake into AI?

Looking ahead

Algorithms cannot be merely passive, neutral force multipliers of advanced capabilities. Instead, as human-machine interactions evolve and deepen, they will inevitably inform and shape the psychological mechanisms that make us who we are. AI agents will likely become, either unwittingly or more likely by conscious choice, de facto strategic actors in war.


While there is no way to avoid the AI commander problem completely, there are steps that can be taken today to mitigate the risks. Firstly, design AI systems that are reliable and secure. This would involve extensive testing and quality control measures to ensure that the AI system can operate as intended and is not vulnerable to cyber-attacks or other forms of manipulation.

Secondly, limit the scope of decision-making authority given to AI systems. Confidence-building measures would help here. Rather than relying on AI to make critical strategic decisions on its own, human operators could oversee and approve any decisions made by the AI system.

Thirdly, develop clear protocols and procedures for how AI systems should operate in different scenarios. This could involve setting strict rules around how and when AI systems can make decisions and ensuring that they are trained on a wide range of potential scenarios to minimise the risk of unintended consequences.

Finally, continue to research and develop AI systems with safety and ethics in mind, including building in safeguards and fail-safes to prevent unintended consequences and unintended uses of the technology. Ultimately, clearly defined and properly placed leadership is needed to balance the potential benefits AI offers in military operations against the need to ensure that its use is safe, ethical, and responsible.

The opinions articulated above represent the views of the author and do not necessarily reflect the position of the European Leadership Network or all of its members. The ELN’s aim is to encourage debates that will help develop Europe’s capacity to address the pressing foreign, defence, and security policy challenges of our time.

Image credit: Rawpixel / US Government