The Convergence of Artificial Intelligence and the Life Sciences

Sarah R. Carter, Ph.D.

Principal, Science Policy Consulting LLC

Nicole Wheeler, Ph.D.

Turing Fellow, The University of Birmingham

Sabrina Chwalek

Technical Consultant, NTI | bio

Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe

Rapid scientific and technological advances are fueling a 21st-century biotechnology revolution. Accelerating developments in the life sciences and in technologies such as artificial intelligence (AI), automation, and robotics are enhancing scientists’ abilities to engineer living systems for a broad range of purposes. These groundbreaking advances are critical to building a more productive, sustainable, and healthy future for humans, animals, and the environment.

Significant advances in AI in recent years offer tremendous benefits for modern bioscience and bioengineering by supporting the rapid development of vaccines and therapeutics, enabling the development of new materials, fostering economic development, and helping fight climate change. However, AI-bio capabilities—AI tools and technologies that enable the engineering of living systems—also could be accidentally or deliberately misused to cause significant harm, with the potential to cause a global biological catastrophe.

These tools could expand access to knowledge and capabilities for producing well-known toxins, pathogens, or other biological agents. Soon, some AI-bio capabilities also could be exploited by malicious actors to develop agents that are new or more harmful than those that may evolve naturally. Given the rapid development and proliferation of these capabilities, leaders in government, bioscience research, industry, and the biosecurity community must work quickly to anticipate emerging risks on the horizon and proactively address them by developing strategies to protect against misuse.

The Research

To address the pressing need to govern AI-bio capabilities, this report explores three key questions:

  • What are current and anticipated AI capabilities for engineering living systems?
  • What are the biosecurity implications of these developments?
  • What are the most promising options for governing AI-bio capabilities that guard effectively against misuse while enabling beneficial applications?

This report draws on interviews with more than 30 experts in AI, biosecurity, bioscience research, biotechnology, and governance of emerging technologies to summarize the risks associated with these emerging AI-bio capabilities.

Recommendations

The report authors offer six recommendations for urgent actions that leaders within government, industry, the scientific community, and civil society should take to safeguard AI-bio capabilities:

  • Establish an international “AI-Bio Forum” to develop AI model guardrails that reduce biological risks
  • Develop a radically new, more agile approach to national governance of AI-bio capabilities
  • Implement promising AI model guardrails at scale
  • Pursue an ambitious research agenda to explore additional AI guardrail options
  • Strengthen biosecurity controls at the interface between digital design tools and physical biological systems
  • Use AI tools to build next-generation pandemic preparedness and response capabilities

These recommendations provide a proposed path forward for taking action to reduce biological risks associated with rapid advances in AI-bio capabilities, without unduly hindering scientific advances. Effectively implementing them will require creativity, agility, and sustained cycles of experimentation, learning, and refinement.

The world faces significant uncertainty about the future of AI and the life sciences, but addressing these risks requires urgent action, unprecedented collaboration, and international engagement.

The launch of this report was part of AI Fringe, held on the margins of the UK government's AI Safety Summit convened by Prime Minister Rishi Sunak. The report helped inform, and was referenced in, the AI Safety Summit discussion paper, Capabilities and Risks from Frontier AI.
