Commentary | 17 August 2020

Artificial intelligence and nuclear weapons: Bringer of hope or harbinger of doom?


Jennifer Spindel | Assistant Professor of Political Science, University of New Hampshire


In 2017, Russian President Vladimir Putin said whichever country leads in the development of artificial intelligence will be “the ruler of the world.” Artificial intelligence is not unlike electricity: it is a general-purpose enabling technology with multiple applications. Russia hopes to develop an artificial intelligence capable of operations that approximate human brain function. China is working to become the world leader in AI by 2030, and the United States declared in 2019 that it would maintain its world leadership in artificial intelligence. Will the world’s major powers seek to use AI with their nuclear weapons and command and control systems? Pairing nuclear weapons – arguably the previous ruler of the world – with this new technology could give states an even greater edge over potential competitors. But the marriage between nuclear weapons and artificial intelligence carries significant risks, risks that currently outweigh potential benefits. At best, using AI with nuclear weapons systems could increase time efficiencies. At worst, it could undermine the foundations of nuclear deterrence by changing leaders’ incentives to use nuclear weapons.

Opportunities in data analysis and time efficiencies

Artificial intelligence could be a boon for drudgery-type tasks such as data analysis. AI could monitor and interpret geospatial or sensor data and flag changes or anomalies for human review. Applied to the nuclear realm, AI could track reactors, inventories, and the movement of nuclear materials, among other things. Human experts would thus be free to spend more of their time investigating changes rather than combing through status-quo data.

Incorporating artificial intelligence into early warning systems could create time efficiencies in nuclear crises. As with data analysis, AI could improve the speed and quality of information processing, giving decision-makers more time to react. Time is the scarcest commodity in a nuclear crisis, since nuclear-armed missiles can reach their targets in as little as eight minutes. Widening the decision window could be key to de-escalating a nuclear crisis.

Challenges posed by risks, accidents, and nuclear deterrence

Incorporating artificial intelligence into nuclear systems presents a number of risks. AI systems need data, and lots of it, to learn and to update their world model. Google’s AI brain simulator required 10 million images to teach itself to recognize cats. Data on scenarios involving nuclear weapons are, thankfully, not as bountiful as internet cat videos. Moreover, much of the empirical record on nuclear weapons would teach an AI the wrong lesson. Consider the near-launches and near-accidents of the Cold War, when both U.S. and Soviet early warning systems mistakenly reported nuclear launches. Although simulated data could be used to train an AI, the stakes of getting it wrong in the nuclear realm are much higher than in other domains. It is also hard to teach an AI the doubt and suspicion that human operators have relied on to detect false alarms and to change their minds.

Accidents are also amplified in the nuclear realm. There are already examples of accidents involving automated conventional weapons systems: in March 2003, U.S. Patriot missile batteries operating in “automated mode” shot down a British fighter plane and a U.S. fighter jet, killing the crews of both planes. Accidents are likely to increase as AI systems become more complex and harder for humans to understand or explain. Accidents like these, which carry high costs, decrease overall trust in automated and AI systems, and will increase fears about what will happen if nuclear weapons systems begin to rely on AI.

Beyond accidents and risks, using AI in nuclear weapons systems poses challenges to the foundations of nuclear deterrence. Data collection and analysis conducted by AI systems could enable precision strikes to destroy key command, control, and communication assets for nuclear forces. This would be a significant shift from Cold War nuclear strategy, which avoided this type of counterforce targeting. If states can target each other’s nuclear weapons and command infrastructure, then second-strike capabilities will be at risk, ultimately jeopardizing mutually assured destruction. For example, AI could identify a nuclear submarine on patrol in the ocean, or could interfere with nuclear command and control, thus jeopardizing one or more legs of the nuclear triad. This creates pressure for leaders to use their nuclear weapons now, rather than risk losing them (or control over them) in the future.

Even if states somehow agree not to use AI for counterforce purposes, the possibility that it could one day be used that way is destabilizing. States need a way to credibly signal how they will – and won’t – use artificial intelligence in their nuclear systems.

The future of AI and nuclear stability

The opportunities and risks posed by the development of artificial intelligence are less about the technology and more about how we decide to make use of it. As the Stockholm International Peace Research Institute noted, “geopolitical tensions, lack of communication and inadequate signalling of intentions” all might matter more than AI technology during a crisis or conflict. Steps to manage and understand the risks and benefits posed by artificial intelligence should include confidence-building measures (CBMs) and stakeholder dialogue.

CBMs are crucial because they reduce mistrust and misunderstanding, and can help actors signal both their resolve and their restraint. As with conventional weapons, transparency about when and how a state plans to use artificial intelligence systems is one type of CBM. Lines of communication, which are particularly useful in crisis environments, are another type that should be explored.

Continued dialogue with stakeholders including governments, corporations, and civil society will be key to developing and spreading norms about the uses of artificial intelligence. Existing workshops and dialogues on the militarization of artificial intelligence and on AI and international security show that such dialogues are possible and productive. The international community can consider building on existing cooperative efforts concerning cyberspace, such as the U.N.’s work on norms and behaviour in cyberspace, the Cybersecurity Tech Accords, and the CyberPeace Institute backed by Microsoft, the Hewlett Foundation, and Mastercard. This dialogue will help us understand the scope of potential change and should give us incentives to move slowly and to push for greater transparency to reduce misperception and misunderstanding.

The opinions articulated above represent the views of the author(s) and do not necessarily reflect the position of the European Leadership Network or any of its members. The ELN’s aim is to encourage debates that will help develop Europe’s capacity to address the pressing foreign, defence, and security policy challenges of our time.

Image: Wikimedia Commons