Atomic Pulse

Eric Schmidt on Global Security in the Age of Artificial Intelligence

Sentient, human-like robots that control the world may be products of our imagination, but recent rapid advances in artificial intelligence (AI) are already starting to reshape the contours of global security. From large language models (LLMs) to autonomous weapons, the potential range of AI applications is as vast as it is transformative. As so often in history, humans face technological changes that could uproot their understanding of knowledge, reality, and security. How do we harness AI’s immense potential without exacerbating risks? How does AI impact the nature of warfare? And how will it shape the future of human decision-making?

NTI Co-Chair and CEO Ernest J. Moniz recently hosted Eric Schmidt, former CEO and chairman of Google, for the inaugural NTI Innovation Forum. In a thought-provoking conversation, Schmidt shared his insights on the capabilities and limits of AI, the challenges the technology poses to nuclear, biological, and cyber security, and the dilemmas it creates for governance and decision-making. Referencing The Age of AI: And Our Human Future, the book he authored with Dr. Henry Kissinger and Daniel Huttenlocher, Schmidt said: “The arrival of intelligence that is analogous to humans is a very big deal in human history. How do we live with [it]?”

Schmidt expressed limited concern about AI’s integration into current nuclear weapons systems, given their continued reliance on outdated, less hackable “floppy disks”—a trait that will change in the coming years as the U.S. government pursues its nuclear modernization efforts. He did, however, express alarm that “we do not have a theory of deterrence going forward.” In the age of AI, the concepts of nuclear deterrence and mutually assured destruction are “untested.”

Schmidt also warned about “Dr. Strangelove scenarios” that could evolve with AI systems in charge of making critical decisions about the use of a nuclear weapon. He stressed that vesting ultimate decision-making authority in human operators in high-stakes, rapid-response situations (in other words, keeping the “human in the loop”) would provide a vital safeguard—for nuclear weapons, drones, and other autonomous weapons alike.

“Compression of time puts meaningful human control at risk,” Schmidt cautioned. He added: “The systems are very powerful today, but they make mistakes.”

Asked about the increasing complexity of cyber risks to nuclear and other critical systems, Schmidt painted AI as a double-edged sword. An AI without guardrails could be tasked with identifying and exploiting vulnerabilities in a targeted manner, attacking continuously until it succeeds; detecting the pattern of such attacks will require robust AI-enabled defense systems. In addition, Schmidt said, there would need to be a way to disconnect entirely from the internet. Effective cybersecurity might therefore entail letting AI fight AI.

Schmidt said he was “quite convinced that we will have a moment in the next decade where we will see the possibility of extreme risk events,” and that “we’re building the tools that will accelerate the dangers that are already present.” The extraordinary power of AI systems, coupled with our limited understanding of their full knowledge and capabilities, presents inherent risks, especially when those systems acquire skills and aptitudes their developers never explicitly taught or anticipated. While the growing availability of open-source LLMs is fueling innovation, Schmidt expressed concern that malicious actors could exploit those models to develop harmful applications, such as the synthesis of deadly pathogens, including viruses. “The dispersion of these tools is so fast, it’s going to happen from some corner that we are not expecting,” he warned.

In addition to the need for human control, Schmidt made a case for strong guardrails and robust monitoring and regulatory frameworks to mitigate threats, including large-scale “recipe-based” attacks. President Biden’s AI Executive Order, the UK’s AI Principles, and the EU’s AI Act are recent starting points. Schmidt—who chaired the U.S. National Security Commission on Artificial Intelligence—said he envisions a comprehensive governance structure for AI that includes AI-powered threat detection and response, AI evaluation companies, and agreements and treaties. He suggested starting with a “no-surprise” treaty: “If you’re going to test something, don’t do it in secret, because that in and of itself could be detected and trigger a reaction.”

Overall, however, humanity would have to build a “human trust framework,” he said. “This is going to be extremely difficult.”

The good news is that technology is not inherently bad. As Schmidt writes in his book, the dual-use nature of AI means that “AI will open unprecedented vistas of knowledge and understanding.” Other technologies, like quantum computers and sensing systems, could help detect nuclear materials. Today, computer science is among the most popular majors at leading U.S. universities, creating a strong talent pipeline that, with the right incentives, can help solve the problems and mitigate the risks the world will inevitably face in the coming decades.

“It’s the beginning of a very long journey,” Schmidt concluded.
