Héloïse Fayet analyses the French literature on France’s perception of military AI, especially its consequences for strategic systems, strategic competition, and nuclear deterrence. Fayet offers practical recommendations for France both domestically and internationally.
Alice Saltini analyses the British literature on the UK’s perception of military and nuclear applications of AI and their impact on strategic stability and NC3. The paper offers recommendations for unilateral measures that the UK can take, as well as multilateral initiatives within the P5 framework, to address the risks associated with AI in nuclear decision-making.
Oleg Shakirov analyses Russian-language literature on the Russian debate on AI in the nuclear field and offers recommendations for P5 states to advance dialogue on AI integration into nuclear C2, force structure and decision-making.
Fei Su and Jingdong Yuan analyse Chinese-language literature to present Chinese perspectives on AI and its military applications. The paper offers recommendations to mitigate the risks associated with the military use of AI in nuclear C2 systems, particularly focusing on the steps that China could consider to enhance its practices.
The nuclear-weapon states China, France, Russia, the United Kingdom, and the United States are increasingly recognising the implications of integrating AI into nuclear weapons command, control, and communication systems. Exploring the risks inherent in today’s advanced AI systems, this report sheds light on the characteristics and risks of different branches of this technology and establishes the basis for a general-purpose risk assessment framework.
In our latest commentary from the ELN’s New European Voices on Existential Risk (NEVER) network, Shane Ward and Eva Siegmann explore how the emergence of non-nuclear strategic threats (NNST) has undermined the normative taboo surrounding the use of nuclear weapons, and why new methods extending beyond deterrence are needed to ensure international stability.