This paper is one of four bibliographies commissioned by the ELN from a range of nongovernmental experts, covering Chinese, French, Russian, and British perspectives on AI integration in nuclear decision-making. It is part of the ELN’s project “Examining the impact of artificial intelligence on strategic stability: European and P5 perspectives”.
This paper looks at the Russian debate on AI and the nuclear field. The bibliography covers two segments of literature. The sections on auxiliary functions of managing nuclear forces, early warning, and command of nuclear forces are primarily drawn from Russian military journals, whereas the section on strategic stability and arms control is primarily drawn from international relations journals. These two segments represent distinct expert communities that only occasionally overlap. Whereas military authors tend to focus on military and military-technical aspects, international relations authors look at the same problems through diplomatic and political lenses. Where appropriate, scholars’ views are supplemented with practitioners’ and policymakers’ remarks.
To advance dialogue among the P5 on AI integration into nuclear C2, force structure, and decision-making, the paper calls for states to consider the following recommendations:
- Glossary of AI-related terms – There seems to be a lack of a common language, which complicates not only international but also internal discussions on this topic. States could start addressing this issue by compiling a glossary of shared terms that would contribute to mutual understanding. One possible option is to expand the existing ‘P5 glossary of key nuclear terms’ with a new section of AI-related terms or, more broadly, terms related to the security of command and control.
- ‘Fear-mapping’ – Many debates on AI, especially in connection with nuclear weapons and warfare in general, are infected with fears of the worst possible scenarios (e.g. that an error in, or an attack against, an AI-enabled system could trigger a nuclear war). States could address this issue by mapping these fears as they relate to nuclear C2 and decision-making. This would require brainstorming all possible fears and concerns, then dissecting them and analysing how anticipated dangers could be avoided.
- Feasibility of non-interference – In bilateral arms control between the Soviet Union/Russia and the United States, there is a long history of non-interference with national technical means. Over the past decade, experts have considered whether the non-interference commitment could be expanded to explicitly address cyber attacks, to cover non-military assets in space, and to include more states. States should discuss whether the idea of non-interference could be applied to artificial intelligence used in the nuclear enterprise, for instance which types of targets should be off-limits to cyber attacks.
- Dependencies between auxiliary functions and critical systems – While the use of AI in command and control is probably the primary concern of states, it is far from the only way that the nuclear enterprise may be transformed by these technologies. States should explore to what extent the integration of AI technologies into auxiliary systems may affect functions critical to nuclear C2, and ways to mitigate possible risks.
- AI risk assessment and audit from other fields – States should learn from practices adopted in other fields, where risk assessment and AI safety audits are more mature. In particular, they should survey ways to establish confidence in the process, its transparency, and its interpretability. This could create space for a general discussion about how AI systems are tested and evaluated, and to what extent lessons learned elsewhere could inform thinking on nuclear decision-making.
- Stabilising uses of AI – While the debate about strategic stability leans toward perceiving AI technologies as a destabilising factor, states should explore ways in which the use of AI could have a stabilising effect on relations between them.
- Impact of conventional AI-enabled weapons on nuclear forces and C2 – One of the concerns raised in the Russian literature is that AI-enabled drones could be used to target nuclear forces or C2 infrastructure. States should analyse to what extent this could be an additional destabilising factor and how it could be addressed.
- Regular exchanges – States should agree to meet regularly to discuss issues related to AI interaction with nuclear C2 and decision-making. Moreover, they should attempt to include AI practitioners in such exchanges to allow for a more substantive dialogue.
The opinions articulated above represent the views of the author and do not necessarily reflect the position of the European Leadership Network or all of its members. The ELN’s aim is to encourage debates that will help develop Europe’s capacity to address the pressing foreign, defence, and security policy challenges of our time.
Image: Pixabay composite