To prevent further erosion of the effectiveness and efficiency of the current NPT review cycle, there is an urgent need for action. Michael Biontino examines which pragmatic proposals could be taken up from the 2023 PrepCom Chair’s recommendations and from the working paper by the Chair of the working group on further strengthening the NPT review process.
To better understand emerging technologies, NEVER members Konrad, Anemone, Emil, Arthur, and Joel outline the evolution of the risk landscape around emerging disruptive technologies and draw parallels between the dangers posed by nuclear weapons and those posed by novel biotechnologies. They explore the broader challenge of governing emerging technologies and suggest potential ways forward.
Listen to the first episode of the NEVER podcast – Ok, Doomer!
In this episode, we explore the basics of man-made existential risk, featuring an introduction to the topic, its relationship to great power competition, how governments have dealt with potential existential threats in the past, such as during the Cuban Missile Crisis, and how they should respond to them in future.
Héloïse Fayet analyses the French literature on France’s perception of military AI, especially its consequences for strategic systems, strategic competition, and nuclear deterrence. Fayet offers practical recommendations for France both domestically and internationally.
Alice Saltini analyses the British literature on the UK’s perception of military and nuclear applications of AI and their impact on strategic stability and NC3. The paper offers recommendations for unilateral measures the UK can take, as well as multilateral initiatives within the P5 framework, to address the risks associated with AI in nuclear decision-making.
Oleg Shakirov analyses the Russian-language literature on Russia’s debate on AI in the nuclear field and offers recommendations for P5 states to advance dialogue on the integration of AI into nuclear command and control, force structure, and decision-making.