Atomic Pulse

Dr. James Johnson on How AI is Transforming Nuclear Deterrence

Will artificial intelligence (AI) make accidental nuclear war more likely? Will it escalate tensions between nuclear-armed states? Will it increase or decrease the chances of nuclear deterrence breaking down? Are policymakers prepared to understand and respond to the security risks that AI is already introducing?

Dr. James Johnson, a lecturer in strategic studies in the Department of Politics and International Relations at the University of Aberdeen, dives into these important questions about the consequences of AI technology for the nuclear age in his new book, AI and the Bomb. The book applies tried-and-tested cornerstones of Cold War-era nuclear theorizing—deterrence, strategic stability, the security dilemma, inadvertent escalation, and catalytic nuclear war—to unpack the challenges that arise with high-speed AI technology and examine AI’s potential impact on the nuclear domain.

On October 11, NTI hosted Johnson for a virtual seminar moderated by Senior Advisor to the NTI President Douglas Shaw to hear more about his findings and what governments and experts should do in response.

The Risk of AI and Nuclear Weapons

During the event, Johnson described the effect that AI technology will have on nuclear deterrence and catalytic nuclear war—a major theme of his book. He began with an overview of how advances in AI may allow adversaries to target nuclear assets; attack nuclear command, control, and communications systems with AI-cyber weapons; and use drones in swarms to strike military assets. In addition, Johnson said, optimized AI algorithms could misinterpret an adversary’s signaling and complicate decision-making around whether to escalate a potential nuclear crisis or stand “back from the brink.”

Further, as AI technology is increasingly used, the growth in human-machine interactions is likely to increase the risk of escalation. Regardless of whether a person is “in the loop,” AI tools will influence every stage of decision-making, ultimately affecting the human-machine dynamic.

“Not only would AIs need to understand the goals and intentions of human commanders, but also the behavior of their human and potentially machine adversaries,” Johnson said. “This challenge for sure is further compounded by the inevitable differences in adversaries’ use of force philosophy.” Adding to the complexity and uncertainty: Would commanders become too reliant on AI? Would commanders distrust AI’s recommendations? How might an adversary calculate risks differently with non-human agents on the other side?

Johnson revisited Cold War-era theory to consider how AI tools in the hands of third-party actors could drag nuclear adversaries into conflict—or even trigger nuclear war. For example, non-state actors could use AI-enhanced cyber tactics to manipulate information and spread conspiracy theories, or damage command, control, and communications systems, early warning satellites, and radars.

Recommendations 

How do we confront the risks that AI technology poses to nuclear escalation? Johnson highlighted four recommendations:

  • Enhance the safety of nuclear weapons by increasing safeguards and risk assessments against cyberattacks
  • Implement authentication codes to improve command and control protocols and mechanisms to decrease the likelihood of an inadvertent nuclear launch
  • Employ robust safeguards to contain the consequences of errors and accidents
  • Develop bilateral and multilateral confidence-building measures in the forms of increased strategic dialogue and potential arms control agreements to manage AI

After surveying the doom-and-gloom nature of AI and nuclear deterrence, Johnson ended with a note of optimism: “AI can be utilized as a solution to enforce or mitigate these risks.” Rather than simply implementing regulations that dampen AI’s benefits, he argued, it is important to focus on what the technology offers, such as its ability to improve safety, command and control, and response times, as well as to reduce the risk of human error. As AI technology continues to evolve, he said, it is imperative that theories of nuclear deterrence and strategic stability evolve alongside it.

