Sarah R. Carter, Ph.D.
Principal, Science Policy Consulting LLC
Rapid scientific and technological advances are fueling a 21st-century biotechnology revolution. Accelerating developments in the life sciences and in technologies such as artificial intelligence (AI), automation, and robotics are enhancing scientists’ abilities to engineer living systems for a broad range of purposes. These groundbreaking advances are critical to building a more productive, sustainable, and healthy future for humans, animals, and the environment.
Significant advances in AI in recent years offer tremendous benefits for modern bioscience and bioengineering by supporting the rapid development of vaccines and therapeutics, enabling the development of new materials, fostering economic development, and helping fight climate change. However, AI-bio capabilities—AI tools and technologies that enable the engineering of living systems—also could be accidentally or deliberately misused to cause significant harm, with the potential to cause a global biological catastrophe.
These tools could expand access to knowledge and capabilities for producing well-known toxins, pathogens, or other biological agents. Soon, some AI-bio capabilities also could be exploited by malicious actors to develop biological agents that are novel or more harmful than those that evolve naturally. Given the rapid development and proliferation of these capabilities, leaders in government, bioscience research, industry, and the biosecurity community must work quickly to anticipate emerging risks and proactively address them by developing strategies to protect against misuse.
The Research
To address the pressing need to govern AI-bio capabilities, this report explores three key questions. It draws on interviews with more than 30 experts in AI, biosecurity, bioscience research, biotechnology, and the governance of emerging technologies to summarize the risks associated with these novel capabilities.
Recommendations
The report authors offer six recommendations for urgent actions that leaders within government, industry, the scientific community, and civil society should take to safeguard AI-bio capabilities.
These recommendations provide a proposed path forward for taking action to reduce biological risks associated with rapid advances in AI-bio capabilities, without unduly hindering scientific advances. Effectively implementing them will require creativity, agility, and sustained cycles of experimentation, learning, and refinement.
The world faces significant uncertainty about the future of AI and the life sciences, but addressing these risks requires urgent action, unprecedented collaboration, and international engagement.
The report was launched as part of AI Fringe, on the margins of the UK government's AI Safety Summit convened by Prime Minister Rishi Sunak. It helped inform, and was referenced in, the AI Safety Summit discussion paper, Capabilities and Risks from Frontier AI.