Commentary | 3 November 2021

Deep fakes: The next digital weapon with worrying implications for nuclear policy

The past decade has witnessed the unprecedented march of technology and the opportunities, dangers, and disruptions that accompany it. In the last four to five years, a synthetic media technology known as deep fakes, produced using machine learning techniques (most notably generative adversarial networks, or GANs), has revolutionised the ways that digital media can be altered. The ability of state and non-state actors to generate, forge, and manipulate media has created clickbait headlines and fake news, ‘terrorised women’ by substituting faces to create fake porn, and abetted the spread of misinformation and disinformation. An opinion piece in the Washington Post has called this worrying trend of mass-scale manipulation the “democratisation of forgery”.
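
For readers less familiar with the underlying technique, the sketch below illustrates the adversarial training loop at the heart of a GAN: a generator network produces synthetic samples from random noise while a discriminator network learns to separate real samples from generated ones, and each pushes the other to improve. This is a minimal, illustrative toy in PyTorch, not the pipeline of any actual deep fake tool; all dimensions and data are placeholders.

```python
# A minimal GAN training loop in PyTorch: an illustrative toy, not any actual
# deep fake pipeline. All dimensions and "real" data here are placeholders.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # toy sizes; real systems work on images or audio

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
# Discriminator: scores how likely a sample is to be real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(512, DATA_DIM)  # stand-in for a corpus of genuine media

for step in range(1000):
    batch = real_data[torch.randint(0, 512, (32,))]
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(batch), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    # This adversarial pressure is what drives increasingly realistic output.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```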

The opportunities and dangers offered by deep fakes are manifold. In the future, societies may well benefit from this technology, for example in education, healthcare, the arts, and criminal forensics. However, deep fake technology has far greater potential to disrupt the ‘normal’. One of the disquieting ramifications of this emerging and disruptive technology (EDT) is the challenge it poses to nuclear weapons decision-making: its impact on decision-makers and wider society; on Nuclear Command, Control, and Communications (NC3); and on nuclear doctrine, posturing, and signalling.

Implications for nuclear weapons decision-making

In the 21st century, nuclear weapons decision-making is markedly different from that of the Cold War era. As great power competition has come back into sharper focus, countries are expanding and upgrading their nuclear arsenals and moving towards incorporating EDTs for warfighting. On the one hand, the political divide between nuclear haves and have-nots is widening and, on the other, the pursuit of EDTs by non-nuclear states is reducing the technology gap between nuclear and non-nuclear states. Simultaneously, arms control is waning. These developments are taking place at a time when trust among states and decision-makers is fast eroding, and generational divides among decision-makers are increasing. For example, senior decision-makers may lack knowledge of new EDTs and the associated technical know-how, while younger decision-makers may lack an understanding of the compressed timelines of nuclear policy-making.

The ability of deep fakes to undermine confidence in the information analysis and outputs provided by digital security platforms can erode trust among states and, in turn, complicate nuclear weapons decision-making, making it difficult for decision-makers to distinguish between correct and spurious information. Hany Farid, a deep fakes expert and computer science professor at Dartmouth College, stated: “The things that keep me up at night these days are the ability to create a fake video and audio of a world leader saying I’ve launched nuclear weapons”. He added that the technology to do this already exists.

As deep fake technology advances rapidly, nuclear weapons policy decision-makers are likely to face questions such as: will deep fakes undermine understanding of enemy intent and misdirect assessments of an adversary’s capabilities? Furthermore, deep fakes may cause the algorithms that provide situational awareness to misclassify based on altered inputs. Such scenarios could cause a breakdown in automated NC3 architecture, with serious consequences. With the corruption and poisoning of data, could adversaries take undue advantage and engage in nuclear brinkmanship? Could non-state actors create misperception and escalation by generating fake videos of a leader suggesting that they have deployed nuclear weapons against an adversary? Even if such fake videos can be quickly detected, it is highly likely that, once they go online, they will sow the seeds of widespread uncertainty.
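
To make the misclassification risk concrete, the sketch below shows one well-known mechanism: a gradient-based adversarial perturbation in the style of the fast gradient sign method (FGSM). The classifier here is a hypothetical, untrained stand-in for a situational-awareness model; the point is only that a small, targeted alteration of the input can change a model’s output.

```python
# An adversarial "altered input" in the style of the fast gradient sign method
# (FGSM). The classifier is a hypothetical, untrained stand-in for a
# situational-awareness model; the output is illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in: 64 input features, 2 output classes.
classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 64, requires_grad=True)  # e.g. features from a sensor feed
true_label = torch.tensor([0])

# Gradient of the loss with respect to the input itself (not the weights).
loss = loss_fn(classifier(x), true_label)
loss.backward()

# Nudge every input feature a small step in the direction that increases the
# loss. With a large enough epsilon this typically flips the prediction even
# though the change to each individual feature is small.
epsilon = 0.5
x_adv = (x + epsilon * x.grad.sign()).detach()

print("original prediction:", classifier(x).argmax(dim=1).item())
print("altered prediction: ", classifier(x_adv).argmax(dim=1).item())
```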

During crises, the general population might find it difficult to tell factual from spurious information, exacerbating the situation. In February 2019, India and Pakistan engaged in a conflict under the nuclear shadow after the Pakistan-based terror network Jaish-e-Mohammad (JeM) conducted an attack that killed more than 40 Central Reserve Police Force (CRPF) personnel in Pulwama district, Kashmir, India. The Indian government responded with airstrikes targeting JeM’s terror camps and training facilities across the Line of Control (LoC), and both countries mobilised their forces, engaged in cross-border firing and shelling along the LoC, and moved tanks to the frontlines. During the crisis, the conflict was escalated by social media, as leaders in both countries took to open platforms like Twitter to rally the masses and mobilise public support, both domestic and international. The Pulwama-Balakot crisis revealed that the use of social media during a crisis thickens the fog of war, as leaders feel compelled to manage domestic public opinion and expectations. Combining social media’s reach with the increasing ability of state and non-state actors to manipulate it, the medium has the potential to cause real-world harm and affect the outcome of a crisis. With the pace of war increasing and decision-making timelines shortening, deep fakes can play a facilitating role in lowering nuclear use thresholds.

In a report titled ‘Weapons of Mass Distortion’, King’s College London’s Marina Favaro classifies deep fake technology as a ‘weapon of mass distortion’, arguing that it is capable of reducing a country’s situational awareness and could erode NC3. Targeting NC3, which supports the very foundations of nuclear deterrence and policy-making, could have a catastrophic effect. As deep fakes grow in sophistication, nuclear weapons decision-makers will find it increasingly difficult to trust machine-generated information. This lack of trust could put decision-makers at a disadvantage during a crisis, both in making decisions quickly and in making decisions based on factual information. Furthermore, asymmetries in the ability of state actors and domestic and international audiences to judge the authenticity of information may also create mistrust and uncertainties that distort the context in which decisions are made.

In a recent exercise at an ELN workshop on nuclear weapons decision-making under technological complexity, former high-level decision-makers elaborated on the dangers of the deliberate use of deep fake technology. They discussed how it could compound the difficulty of identifying key facts under time constraints and affect a decision-maker’s ability to ‘process and assimilate’ information and thus make a decision. The introduction of deep fakes into classified data feeds could also severely undermine decision-makers’ ability to factually assess a situation and plan.

As more countries invest in counterforce technologies, deep fake technology could be utilised by states and non-state actors to pursue a predetermined escalatory path or create situations that appear to necessitate a first strike. The deliberate use of deep fakes to gain an asymmetric advantage in conflict could also significantly affect countries’ nuclear doctrines, posturing, and signalling. As deep fake technology matures, it is likely to become salient in military information operations and could create compulsions to counterattack based on lies and fabrications. Countries might feel compelled to resort to non-nuclear preemptive strikes, leading to crisis escalation amid the challenges of attribution and verification. Because verifying the authenticity of audio and video is difficult, leaders will probably have to act on “limited information”, lacking the tools or time to distinguish reliable from spurious information. With the help of deep fakes, adversaries could also engage in blackmail, for instance by creating compromising deep fake videos of elected officials or individuals with access to classified information, to use as leverage.
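
As an illustration of what authenticating media would require, consider cryptographic provenance: a trusted capture device signs content at creation, so that any later alteration invalidates the signature. The minimal sketch below uses Python’s standard hmac module with a hypothetical shared key; real proposals use public-key signatures and secure hardware, but the “sign at capture, verify later” principle is the same, as is the limitation that media circulating without provenance cannot be checked this way at all.

```python
# Minimal sketch of provenance-based verification, assuming a hypothetical
# shared device key. Real proposals use public-key signatures and secure
# hardware; the principle of "sign at capture, verify later" is the same.
import hashlib
import hmac

DEVICE_KEY = b"hypothetical-device-key"  # placeholder for a capture-device key

def sign_media(media: bytes) -> bytes:
    """Produce an authenticity tag at capture time."""
    return hmac.new(DEVICE_KEY, media, hashlib.sha256).digest()

def verify_media(media: bytes, tag: bytes) -> bool:
    """Check the tag; any post-capture alteration invalidates it."""
    expected = hmac.new(DEVICE_KEY, media, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

original = b"raw video frames..."
tag = sign_media(original)

print(verify_media(original, tag))                 # True: untampered
print(verify_media(b"deep-faked frames...", tag))  # False: altered after capture
# Media that arrives with no provenance at all (the common case today) simply
# cannot be checked this way, which is why verification remains so hard.
```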

A recent Forbes article argued that deep fakes are a net positive for humanity, offering examples such as their ability to create fake brain MRI scans for medical purposes. Another article showcased how, with machine learning deep fake technology, a museum in Florida recreated a life-size version of the surrealist painter Salvador Dali telling stories about his life. While there are many potential benefits to deep fake technology, the dangers and risks it poses when utilised for nefarious purposes require urgent attention. Deep fakes will create, facilitate, and abet chaos in conflict, lower nuclear thresholds, and complicate nuclear weapons decision-making. It is important that the nuclear weapons policy community be cognizant of the challenges posed by deep fakes and respond to the technology’s uncontrolled use and spread through focused research studies and awareness-building exercises. Soon, it may become pertinent to push for norms and legislation regulating the technology’s use, especially during a crisis.

The opinions articulated above represent the views of the author(s) and do not necessarily reflect the position of the European Leadership Network or any of its members. The ELN’s aim is to encourage debates that will help develop Europe’s capacity to address the pressing foreign, defence, and security policy challenges of our time.

Image: Screengrab from “New technique for detecting deepfake videos”, UC Berkeley, Creative Commons