Commentary | 19 March 2024

Navigating cyber vulnerabilities in AI-enabled military systems

Rapid advances in artificial intelligence (AI) are producing capabilities with the potential to revolutionise many sectors of society, including the military. As AI technology evolves, its integration into military operations is increasingly viewed as a strategic priority by countries eager to enhance decision-making processes and operational efficiency. Yet despite their potential, current AI systems suffer from critical shortcomings: their robustness and reliability are not yet sufficiently advanced to guarantee dependable performance in the high-stakes domain of military operations.

Among other risks, AI-enabled systems are vulnerable to cyber attacks in ways that traditional military platforms are not, providing new entry points for hackers to access and manipulate sensitive military data or disrupt military operations. Defensive measures against such threats are lagging behind, leaving military systems open to exploitation by adversaries.


As countries continue incorporating AI into conventional military systems, they should prepare for the risk that adversaries will work – and likely already are working – to exploit weaknesses in AI models, both by corrupting the datasets at the core of these models and by developing novel exploits. Any integration of AI-enabled platforms must be approached with the understanding that they are highly susceptible to failures and offer new entry points for adversaries to manipulate. Moreover, even narrowly scoped AI integration across conventional military systems – especially those connected, even indirectly, to nuclear decision-making – could have unforeseeable repercussions in the nuclear arena. States should therefore pursue a risk-based strategy, developing metrics to assess how vulnerabilities would affect each area of AI integration.

Cyber vulnerabilities emanating from current AI systems

The vulnerabilities present in current AI systems are opening doors for hackers to undermine data integrity, compromise confidentiality, and disrupt availability, leading to erroneous outcomes, data breaches, and system failures.

Integrity attacks, the most prevalent form of attack on AI systems, aim to deceive these systems into making erroneous decisions. In data poisoning, attackers manipulate the training data so that the AI learns incorrect patterns. In military platforms, such manipulation could lead to a range of scenarios, from failure to identify correct targets to catastrophic failures such as misidentifying friendly forces as hostile. Evasion techniques – another type of integrity attack – exploit imperfections in a trained model at inference time: even a single tampered input can cause a detection system to misclassify. An example would be modifying drone imagery data to disguise an adversary’s mobile missile launcher.
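To make the poisoning mechanism concrete, the sketch below shows how an attacker who silently flips a small fraction of training labels can degrade a classifier before it is ever deployed. It is a minimal illustration on synthetic data with an off-the-shelf scikit-learn model; the dataset, model choice, and 10% flip rate are all assumptions for demonstration, not a description of any real military system.

```python
# Illustrative label-flipping data-poisoning sketch on synthetic data.
# Everything here (dataset, model, 10% flip rate) is hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a binary sensor-classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on the given labels, report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Baseline: model trained on clean labels.
clean_acc = train_and_score(y_train)

# Poisoning: the attacker silently flips 10% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]  # flip the binary labels on chosen rows
poisoned_acc = train_and_score(poisoned)

print(f"accuracy on clean labels:    {clean_acc:.3f}")
print(f"accuracy after 10% flipping: {poisoned_acc:.3f}")
```

Even this crude attack typically produces a measurable drop in accuracy; documented poisoning techniques are subtler, targeting specific classes of input rather than degrading performance across the board.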

Through confidentiality attacks, hackers infer protected information about a system’s operation or its training data. These attacks can lead to significant security breaches because they can reveal the classified or sensitive data on which some military models are trained. With greater knowledge of the underlying model, further exploits are likely to follow, potentially including methods for fooling detection capabilities.
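A well-studied confidentiality attack of this kind is membership inference: an adversary with only query access guesses whether a given record was in the training set by observing how confidently the model scores it, since overfit models tend to be more confident on data they have seen. The sketch below is a simplified illustration on synthetic data; the model, threshold, and data are assumptions for demonstration only.

```python
# Minimal confidence-thresholding membership-inference sketch, in the
# spirit of the academic literature. Data and threshold are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=1)

# An overfit model leaks membership: it is more confident on data it saw.
model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_in, y_in)

def top_confidence(samples):
    """Highest class probability the model assigns to each sample."""
    return model.predict_proba(samples).max(axis=1)

conf_members = top_confidence(X_in)       # records that were in training
conf_non_members = top_confidence(X_out)  # records that were not

# Attacker's rule: claim "member" when confidence exceeds a threshold.
threshold = 0.9
tpr = (conf_members > threshold).mean()      # members correctly flagged
fpr = (conf_non_members > threshold).mean()  # non-members wrongly flagged
print(f"flagged as member: {tpr:.2f} of members vs {fpr:.2f} of non-members")
```

The gap between the two rates is the leakage: from the outside, an attacker learns which records a supposedly confidential model was trained on.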

Finally, availability attacks, including denial-of-service (DoS) and ransomware attacks, aim to render critical systems inoperable. In a military context, this could mean disrupting the AI systems that manage logistics and supply chains, causing shortages of supplies at critical moments. These methods of attack are not unique to AI systems, but they nonetheless threaten AI-enabled platforms.

Cyber attacks against AI are a concerning trend because of their ease of execution, the prevalence of the vulnerabilities they rely on, and the difficulty of defending against them. Executing such attacks often requires far less expertise and fewer resources than designing and training the systems they target. This imbalance is exacerbated by the fact that hardening AI against failures frequently requires compromises in performance. Such trade-offs may address specific vulnerabilities while inadvertently amplifying others, offering attackers new weaknesses to exploit.

Implications for global stability

As countries ramp up their deployment of AI technologies in conventional military systems, adversaries will almost certainly seek to identify and exploit vulnerabilities in these systems, especially in pre-conflict scenarios. The implications for Western defence systems are significant, given the active engagement of adversaries like Russia and China in cyber operations. Both countries have committed substantial resources to cyber warfare.

China, for instance, likely views offensive and defensive cyber capabilities as a means to gain information advantage, and the US Department of Defense has identified it as a growing cyber threat to military and critical infrastructure, highlighting its strategic use of cyber espionage to undermine Western capabilities. Similarly, Russia has leveraged cyber operations to exert control over its population and to influence the political landscapes of adversary states, as evidenced by its interference in the 2016 US presidential election. These actions demonstrate Russia’s capacity to employ cyber tactics to destabilise other nations.

The threat landscape is further worsened by the possibility of cyber attacks coming from non-state actors, as well as states with significantly smaller resources that might be motivated to disrupt Western defence systems. Given the relatively low barriers to executing AI system breaches, which do not necessarily demand extensive resources or expertise, there exists a tangible risk that non-state actors could compromise military operations.

Countries with limited resources, such as North Korea, are also active in this arena. North Korea’s cyber attacks focus primarily on espionage and financial crime to support its military capabilities and circumvent sanctions, demonstrating a strategic use of cyber operations for financial gain and power projection. North Korea is also pursuing an AI program with potential military applications and has already used AI to aid offensive cyber operations – this despite sanctions and resource limitations that pose significant hurdles to developing a robust military AI program in the near term.


Cyber attacks are likely to be an attractive, cost-effective way for both state and non-state actors to gain an asymmetric advantage and to challenge more technologically advanced adversaries. Adversaries of Western nations may already be working to undermine military AI platforms by exploiting cyber vulnerabilities.

AI and escalation pathways

When it comes to the nuclear domain, AI vulnerabilities demonstrate the precariousness of relying on this technology in areas where security is paramount, such as nuclear command, control, and communications (NC3) systems. But even as nuclear-weapon states are hesitant to integrate AI into critical functions of NC3, widespread AI adoption in conventional military platforms could still have unexpected downstream effects on nuclear risks.

The aggregate effects of AI integration in conventional military systems or intelligence platforms could be unpredictable in the nuclear domain. Moreover, adversarial interference through cyber attacks could enable large-scale deception, which in turn can cause widespread miscalculation and misinterpretation. For example, if AI-enabled intelligence and surveillance systems that feed into NC3 are compromised, the integrity of the information being processed and relayed is at stake. This could create a false perception of an imminent threat or a misunderstanding of an adversary’s actions, potentially triggering an unintended or escalatory response. In a geopolitically unstable environment, such miscalculations could heighten the risk of inadvertent or accidental escalation.

Moreover, deceiving systems that indirectly feed into NC3 could confer an advantage on adversaries with offensive cyber capabilities, tempting one side to consider pre-emptive strikes as a viable strategy to counteract or mitigate perceived threats.

Additionally, highly networked military systems, if exploited by adversaries, could suffer catastrophic cascading failures. Such failures would undermine conventional deterrence capabilities and, in extreme cases, might corner a state into considering a limited nuclear response as a last resort to restore deterrence.

Conclusions

In light of these considerations, the Western defence apparatus must adapt to a security environment in which cyber threats are not only ubiquitous but also evolving alongside the integration of AI technologies. These technologies must be incorporated with the utmost caution, particularly as cyber threats grow more sophisticated and pervasive.


To address this, it is crucial for these states to establish clear guidelines for military applications of AI by developing metrics based on cyber risk. These metrics should assess how cyber vulnerabilities could affect each area in which AI is integrated into military systems, emphasising the necessity of human oversight and the ability to revert to manual control should anomalies arise. At the same time, there should be a concerted effort to bolster cyber defences through targeted research into defending against such attacks.
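As a purely illustrative sketch of what such human oversight and manual fallback could look like in software – with the Decision type, thresholds, and anomaly check all hypothetical rather than a reference design – an AI output might be accepted only inside a pre-agreed envelope and otherwise escalated to a human operator.

```python
# Hypothetical sketch of a human-oversight gate: accept the model's
# output only when confidence is high and an anomaly check passes;
# otherwise revert to manual control. Names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str         # the model's proposed output
    confidence: float  # model-reported confidence in [0, 1]

def gated_decision(
    model_output: Decision,
    anomaly_score: float,                  # from a separate input monitor
    min_confidence: float = 0.95,
    max_anomaly: float = 0.1,
    escalate: Callable[[Decision], str] = lambda d: "DEFER_TO_HUMAN",
) -> str:
    """Accept the output only inside a pre-agreed safe envelope;
    otherwise hand control back to a human operator."""
    if model_output.confidence < min_confidence or anomaly_score > max_anomaly:
        return escalate(model_output)  # revert to manual control
    return model_output.label

# Example: a low-confidence identification is escalated, not acted on.
print(gated_decision(Decision(label="hostile", confidence=0.62), anomaly_score=0.03))
```

The design point is that the fallback path is defined before deployment, so that anomalous behaviour triggers human review by default rather than as an afterthought.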

Given the high stakes involved, it should go without saying that AI should not be integrated into critical NC3 functions. The inherent risks, the potential for catastrophic consequences if nuclear deterrence systems are compromised, and the current unreliability of AI technologies all necessitate a conservative approach.

The opinions articulated above represent the views of the author(s) and do not necessarily reflect the position of the European Leadership Network or any of its members. The ELN’s aim is to encourage debates that will help develop Europe’s capacity to address the pressing foreign, defence, and security policy challenges of our time.

Image: Composite image. Sources: Pixabay and Defense Visual Information Distribution Service