Commentary

The potential terrorist use of large language models for chemical and biological terrorism

In our latest New European Voices on Existential Risk (NEVER) commentary, Nicolò Miotto explores the potential existential risks stemming from the terrorist use of large language models (LLMs) and AI to manufacture chemical, biological, radiological and nuclear (CBRN) weapons. He examines how LLMs and AI have enhanced terrorist groups' capabilities so far, and what governments, the private sector, and NGOs must do to mitigate future risks.

5 April 2024 | Nicolò Miotto
Commentary

Ok, Doomer! The NEVER Podcast – Climate change: A hot topic

Listen to the third episode of the NEVER podcast – Ok, Doomer! In this episode, we explore climate change, the existential risk the general public is most familiar with. The episode examines how the climate crisis affects politics at the local, national and international levels, considers climate change as a "polycrisis", and looks at how the world has previously united around environmental policies, such as those that closed the hole in the ozone layer.

Commentary

Navigating cyber vulnerabilities in AI-enabled military systems

As countries continue incorporating AI into conventional military systems, they should prepare for the risk that adversaries are likely already working to exploit weaknesses in AI models by targeting the datasets at their core. To address this, Alice Saltini writes that states should develop metrics to assess how cyber vulnerabilities could affect AI integration.

19 March 2024 | Alice Saltini
Commentary

Sounding the alarm on AI-enhanced bioweapons

In our latest commentary from the New European Voices on Existential Risk (NEVER) network, Rebecca Donaldson explores how new technologies in AI and the life sciences can be harnessed for security while minimising their potential for harm. She proposes increased funding for the Biological Weapons Convention, the creation of an Emerging Technology Utilisation and Response Unit (ETURU), and the fostering of a culture of AI assurance and responsible democratisation of biotechnologies.

26 February 2024 | Rebecca Donaldson
Commentary

What does global military AI governance need?

In the absence of globally acknowledged governance frameworks for AI in the military domain, two new initiatives emerged in 2023: the Responsible Artificial Intelligence in the Military Domain (REAIM) summit and the US-initiated Political Declaration. Mahmoud Javadi and Michal Onderco analyse both, writing that REAIM provides a much-needed space for a democratic, depoliticised, and decentralised approach to global military AI governance.

2 February 2024 | Mahmoud Javadi and Michal Onderco