
Jill Hruby
Sam Nunn Distinguished Fellow
At a time when most countries with nuclear weapons are modernizing or diversifying their nuclear arsenals, significant technological advances in artificial intelligence (AI) for military applications suggest that AI will inevitably be explored for use in nuclear-weapon systems. Along with potential benefits, however, come associated risks and implications for strategic stability.
Two application areas are considered the most likely to take advantage of AI advances in the near to medium term: Nuclear Command, Control, and Communications (NC3) and autonomous nuclear-weapon systems. This paper envisions the specific functions AI could perform in these two areas and analyzes the potential positive and negative consequences.
In NC3, AI could be applied to enhance reliable communication and early warning systems, to supplement decision support, or to enable automated retaliatory launch. The implications vary dramatically. Enhancing communication reliability and decision-support tools with AI has recognizable benefits, is relatively low risk, and is likely stabilizing, although it still requires additional technical research to lower risk, as well as deeper policy exploration of stability implications to avoid provoking an arms race. Applying AI to automated retaliatory launch, however, is highly risky and should be avoided.
For autonomous nuclear-weapon systems, AI, along with sensors and other technologies, is required for sophisticated capabilities such as obstacle detection and maneuverability, automated target identification, and longer-range and loitering capability. Today’s technology and algorithms face challenges in reliably identifying objects, responding in real time, planning and controlling routes in the absence of GPS, and defending against cyberattacks. Given this lack of technological maturity, fully autonomous nuclear-weapon systems are highly risky. These risks, combined with the potential instability these weapons may cause, suggest that a ban on fully autonomous systems is warranted until the technology is better understood and proven.
For each state with nuclear weapons, the specific application and timing of AI incorporation will depend on the production or modernization schedule, the perceived benefits and needs, the technical capabilities and level of investment, and the level of risk acceptance. To encourage safe application and help minimize risks and negative effects on strategic stability as AI is introduced into nuclear-weapon systems over time, the paper offers a set of recommendations.