Members of the NEVER network convened on Thursday, 24 August, to discuss climate change, climate change governance, and the role of great power competition in aggravating this existential risk, with NEVER coordinator Edan Simpson chairing the meeting.
In May, the ELN’s New European Voices on Existential Risk (NEVER) network convened for the first time for a discussion of existential risk, how policymakers should react to it, and why diverse voices matter in the existential risk space.
Could establishing Open Source Intelligence (OSINT) systems for nuclear security in the EU improve existential risk resilience?
Nicholas Moulios, from the ELN’s New European Voices on Existential Risk (NEVER) Network, has written the ELN’s first commentary as part of this project. He explores the capabilities of OSINT systems for assessing and mitigating potential nuclear disasters, and how these emerging technologies can be harnessed to improve existential risk resilience in Europe and beyond.
To avoid nuclear instability, a moratorium on integrating AI into nuclear decision-making is urgently needed: The NPT PrepCom can serve as a springboard
The integration of neural networks into NC3 poses a multitude of risks to global security. Alice Saltini writes that to pave the way for a moratorium, NPT State Parties should use the PrepCom to focus discussions on understanding the risks associated with integrating deep learning models into nuclear decision-making.
What do emerging and disruptive technologies like AI mean for the role of humans in war? How might AI-augmented human-machine interaction affect the role of human command in war? And what might an AI commander look like? James Johnson explores these topics and assesses whether we are moving towards a situation in which AI-enabled autonomous weapons, rather than humans, make strategic decisions during conflict.