Risky Business
How to prevent AI-enabled bioterrorism
AI biodesign tools offer many beneficial uses, from engineered crops to vaccine development, but tools that can engineer biological agents could also be misused to cause harm.
Without proper safeguards, these tools might inadvertently aid bioterrorists by enabling them to design pathogens that are more lethal, more transmissible, or novel enough to evade current countermeasures. Adequate controls are necessary to prevent these AI capabilities from enabling deliberate harm.
The Emerging AI-Bio Regulatory Landscape
The first steps toward regulation that would reduce AI-enabled biosecurity risks, going beyond voluntary commitments, are underway but remain incomplete.
In 2023, the Biden Administration issued an Executive Order on AI that mandates safety reporting for highly capable AI systems, requiring developers to screen for and disclose potential risks. The Executive Order puts specific focus on biological models: models trained primarily on biological sequence data are subject to reporting requirements at a threshold three orders of magnitude below the minimum training compute threshold for any other model (10^23 operations, versus 10^26 for general-purpose models).
Similarly, the European Union's AI Act requires the disclosure of safety information for generative AI, which includes biological design tools. The Act also obligates developers to ensure their models cannot generate illegal content, which could include information about bioweapon development.
Many AI developers take the risks of AI-enabled bioterrorism seriously. Companies like OpenAI and Anthropic voluntarily screen their models for biosecurity risks, even though their models are not specifically designed for biological applications.
Blind Spots
While a step in the right direction, current regulatory approaches have two key shortcomings: early risk assessments overlook what makes biological design tools distinct, and practical safeguards to limit dual-use capabilities are lacking.
Ideally, risks could be assessed before a model is developed or released. This would allow developers and regulators alike to introduce safeguards against misuse from the start or even halt the training or release of a model.
To determine which models could become highly capable and thus pose misuse risks, regulators evaluate the number of computations used to train a model. Increases in this so-called “training compute” are generally associated with more capable generative AI. This relationship has held consistently, making compute a popular proxy for highly capable, potentially dangerous models.
However, specialized biological design tools might slip through this safety net. Because biological design tools have narrower applications and training data than current frontier large language models such as ChatGPT, far less computing power is needed to produce a highly capable, dual-use biological design tool. If early assessment looks only at training compute, safeguards may arrive only once the risks become apparent, which may be too late.
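To make the compute-threshold logic concrete, the following is a minimal sketch in Python. It assumes the widely used rule of thumb that training compute is roughly 6 × parameters × training tokens; the thresholds mirror the Executive Order figures cited above, while the example parameter and token counts are purely illustrative assumptions, not real model figures.

```python
# Minimal sketch: estimate a model's training compute and check it against the
# reporting thresholds in the 2023 Executive Order on AI. The parameter and
# token counts below are illustrative assumptions, not real model figures.

GENERAL_THRESHOLD = 1e26  # operations; applies to any model
BIO_THRESHOLD = 1e23      # operations; models trained primarily on biological sequence data


def estimate_training_compute(parameters: float, training_tokens: float) -> float:
    """Rough estimate using the common rule of thumb: compute ~ 6 * N * D."""
    return 6 * parameters * training_tokens


def must_report(compute: float, primarily_biological: bool) -> bool:
    """Apply the lower threshold to models trained primarily on biological data."""
    threshold = BIO_THRESHOLD if primarily_biological else GENERAL_THRESHOLD
    return compute >= threshold


# A hypothetical protein design model: 3 billion parameters, 200 billion tokens.
bio_compute = estimate_training_compute(3e9, 2e11)           # ~3.6e21 operations
print(must_report(bio_compute, primarily_biological=True))   # False: below both thresholds
```

A specialized design tool of this illustrative size would fall well below even the lower biological-data threshold, which is exactly the blind spot described above.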
A second problem is that, once risky models have been identified, current guidance is silent on recommended actions to limit dangerous capabilities. One approach would be a general halt on training or releasing any new AI model more powerful than existing ones, as a popular open letter suggested in 2023. However, given the tremendous benefits these models offer, such a heavy-handed approach is unlikely to be politically feasible or desirable. More nuanced safeguards will be necessary to advance the beneficial development of AI for biology.
Future Safeguards
To stay ahead, policymakers and developers must consider additional safeguards along two lines.
- Look beyond pure computing power. Policymakers should consider the amount and kind of data used to train biological design tools to make better-informed estimates of these tools’ capabilities and risks before they are deployed. This would help ensure that biosecurity concerns are addressed early on, before any potential misuse can occur.
- Provide guidance on potential guardrails for biological design tools. This could include screening user requests for misuse concerns and restricting access to legitimate users only, as illustrated in the sketch below. As a recent NTI | bio report recommends, building such guardrails into the tools themselves would make them inherently safer to use.
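To illustrate what such guardrails might look like in practice, here is a minimal sketch assuming a hypothetical design-tool request pipeline; the user list, keyword screen, and function names are illustrative assumptions rather than features of any existing tool.

```python
# Illustrative sketch of the guardrails described above: verify that a user is
# a vetted researcher, then screen the request for misuse concerns before any
# design job runs. All names, lists, and checks here are assumptions.

VERIFIED_USERS = {"alice@university.example", "bob@biotech.example"}

# A toy keyword screen; a real system would use far more sophisticated review.
CONCERN_TERMS = ("enhance transmissibility", "evade vaccine", "increase lethality")


def is_verified_user(user_id: str) -> bool:
    """Restrict access to users who have passed an identity and affiliation check."""
    return user_id in VERIFIED_USERS


def flags_misuse(request_text: str) -> bool:
    """Flag requests that match known misuse concerns for human review."""
    text = request_text.lower()
    return any(term in text for term in CONCERN_TERMS)


def handle_design_request(user_id: str, request_text: str) -> str:
    if not is_verified_user(user_id):
        return "rejected: unverified user"
    if flags_misuse(request_text):
        return "held for biosecurity review"
    return "accepted"


print(handle_design_request("alice@university.example", "optimize enzyme thermostability"))  # accepted
```

In a real deployment, requests flagged by such a screen would go to human biosecurity reviewers rather than being silently refused, preserving legitimate research uses.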
AI-powered biological design tools present remarkable opportunities for innovation but also carry significant risks if misused. To harness their potential while minimizing dangers, policymakers and developers need to improve current safeguards by anticipating risks early and embedding biosecurity measures directly into these technologies. This proactive approach enables greater security while supporting scientific innovations that benefit society.