Atomic Pulse

Deep Fakes and Dead Hands: Artificial Intelligence’s Impact on Strategic Risk

Disinformation, deep fakes, killer robots, and dead hands—all are examples of the dangerous ways artificial intelligence (AI) could lead to war or change the face of war itself. There are a number of scenarios in which artificial intelligence could lead to catastrophic consequences, including the use of nuclear weapons. However, in some instances, the use of artificial intelligence could actually be stabilizing.

NTI has been tracking the potential risks and opportunities that AI poses for strategic stability in four distinct potential areas of application: autonomous nuclear weapons systems; nuclear command, control, and communications systems; cybersecurity; and disinformation. These are some of NTI’s key findings:

  • Before AI is considered for use in nuclear weapons systems, it must be more transparent, have a lower risk of failure, and have better explainability (the ease and accuracy with which a learning model explains its conclusions).
  • Given the implications for strategic stability and increased nuclear risks, a ban on fully autonomous nuclear weapons systems should be considered.
  • AI can provide benefits for intelligence gathering and analysis, including characterizing attacks (e.g., their size and target) and improving early warning systems.
  • AI will have both positive and negative effects on the cybersecurity around nuclear weapons and materials, command and control of nuclear arsenals, and other critical nuclear weapons systems.
  • The likelihood that AI-supported disinformation, such as “deep fakes,” will circulate in the public sphere, especially during national crises, creates strategic risk.

To delve deeper, here are summaries of NTI’s reports on the issues to date:

Autonomous Nuclear Weapons Systems

Lethal autonomous weapons are weapon systems that can identify, select, and engage a target without meaningful human control. AI could give states with nuclear weapons the ability to create fully autonomous first- or second-strike nuclear weapons capabilities.

Given the implications for stability and increased nuclear risks, a ban on fully autonomous nuclear weapons systems should be considered. While some might consider AI-mediated assured retaliation stabilizing, the possibility of unintended launch due to adversarial hacking and/or spoofing of AI-based autonomous systems makes their use unacceptably risky in nuclear weapons systems. Further, it is difficult to justify why a nuclear war would be better fought at machine speed, especially because nuclear weapons are meant primarily as a deterrent.

For more information, read:

Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems 

U.S. Nuclear Modernization: Security & Policy Implications of Integrating Digital Technology 

Nuclear Command, Control, and Communications Systems

One of the most likely potential applications of AI technologies in nuclear systems is improving the security and resilience of nuclear command, control, and communications systems. AI can provide continuous assessment of system health during a crisis and can make early warning systems faster and better at attack characterization. A significant risk, however, is that an AI system could ingest erroneous information and draw faulty, potentially devastating, conclusions. To avoid decision-making based on incorrect or misunderstood information, any AI systems deployed should be transparent and understandable, as well as continuously tested, validated, and verified—capabilities that do not currently exist.

For more information, read:

Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems

Cybersecurity of Nuclear Weapons Systems

As nuclear forces are modernized, AI could be a beneficial tool to improve the cybersecurity of a digital nuclear command, control, and communication system and nuclear weapons systems—although, ironically, any AI system introduces a new vector for a cyberattack, and AI also could be used in the future as an adversarial tool to automate cyberattacks. A cyberattack on the command-and-control structure could erode confidence in deterrence capabilities if the system is not reliable. As a result, governments must aggressively explore technical and policy options to tackle the cybersecurity threat, understanding that it is impossible to have full confidence in the information our digital systems provide.

For more information, read:

U.S. Nuclear Modernization: Security & Policy Implications of Integrating Digital Technology 

Nuclear Weapons in the New Cyber Age 

The Big Hack’s Nuclear Implications: No Confidence in Essential Systems

AI-Supported Disinformation

The possibility of a nuclear scenario, whether a nuclear power plant disaster or a false missile alert like the one in Hawaii in 2018, should be considered in any plans for social media platforms to improve their crisis response options. Deep fake videos (artificial media created through machine learning for the purpose of deception), planted in a time of crisis, could have disastrous consequences in a nuclear scenario. As social media participation increases around the world, the choice is not whether these platforms should be a part of crisis communication; the question is how they can improve.

For more information, read:

Live Tweeting Nuclear War: Social Media, WMD Threats, and Crisis Communication

AI creates exciting opportunities to reduce risk through improved cybersecurity, more reliable command-and-control systems, and better-analyzed information that can help prevent and de-escalate nuclear crises. At the same time, however, care is needed to avoid applications that create additional risks, including the risk of unintended nuclear use.
