Response by NTI to the request for information (RFI) issued by the U.S. government on the Artificial Intelligence (AI) Action Plan called for in President Donald Trump’s January 2025 Executive Order.
To the Office of Science and Technology Policy:
This comment is prepared in response to the Request for Information on the Development of an Artificial Intelligence (AI) Action Plan, published in the Federal Register on February 6, 2025. This document is approved for public dissemination and contains no business-proprietary or confidential information. Its contents may be reused by the government in developing the AI Action Plan and associated documents without attribution.
At NTI | bio, we work to strengthen biosecurity and to reduce risks related to advances in biotechnology and the broader life sciences. Since 2023, we have tracked the convergence of AI with the life sciences, its implications for biosecurity, technical solutions for risk mitigation, and options for governance. The use of AI in the life sciences offers tremendous benefits for society, including advancing innovative therapeutics and vaccines, supporting early detection of infectious disease threats, and bolstering the broader bioeconomy. At the same time, these advances could also increase the risk of deliberate or accidental release of harmful biological agents, including those that could cause a global biological catastrophe. Our previous work has emphasized the need for model developers, governments, biosecurity experts, and others to work together to develop effective safeguards to protect these technologies from misuse.
In addition to preventing biological catastrophe, safeguards for AI models are critical to the competitiveness of U.S. industry and products on the global stage. As technology progresses, the U.S. can maintain its leadership in AI by supporting governance approaches that harness the benefits of AI while guarding against downside risks. The Trump Administration should pursue several key priorities to achieve these goals:
- Bolster the OSTP framework for nucleic acid synthesis screening. This framework, released in April 2024, requires that laboratories receiving federal funding purchase synthetic nucleic acids only from providers that conduct biosecurity screening. Nucleic acid synthesis is a foundation of modern molecular biology, and screening by providers helps prevent malicious actors from obtaining nucleic acid sequences that encode dangerous toxins and pathogens. This screening is particularly important as AI advances because nucleic acid synthesis is a key bottleneck at the interface where digital information and designs become biological reality. OSTP implemented the screening requirements in response to Section 4.4 of the October 2023 Executive Order 14110, but the framework also builds on longstanding voluntary commitments by the nucleic acid synthesis industry under the International Gene Synthesis Consortium and on U.S. government guidance issued in 2010 and updated in 2023. Although the January 2025 Executive Order 14179 rescinded the previous Executive Order, it will be critical to support and improve upon the screening framework, including through a commitment to update it every two years so that it remains effective in the face of rapid technological progress in both biotechnology and AI.
- Elevate the U.S. AI Safety Institute (AISI). The AISI was established within the National Institute of Standards and Technology in early 2024 and has been critical for evaluating and mitigating risks posed by AI, including biological risks. It does not establish regulatory requirements; instead, it works closely with industry and other model developers to improve safety evaluations and develop risk mitigation strategies. A key challenge for the AI industry in reducing biosecurity risks has been a lack of expertise in virology, molecular biology, and other technical areas of the life sciences. The AISI can help bring these experts together to ensure that safety evaluations are meaningful, that risk mitigations are targeted and effective, and that companies can efficiently safeguard their technologies while remaining competitive. Ensuring that AI is not exploited to engineer a dangerous biological agent is in the vital national security interest of the United States.
- Support research on risk mitigations, or guardrails, for AI models trained on biological data and for the use of AI “agents” in the life sciences. Some AI models are trained specifically on biological data (often referred to as biological AI tools) and are intended to provide insights, predictions, and designs related to biology. As these models become increasingly capable of designing novel biological molecules not found in nature, it is possible that future models could be misused to design pathogens more harmful than any in history. Additionally, the capabilities of AI “agents” are progressing rapidly and are being applied to the life sciences, including in laboratories equipped with advanced robotics. These agents may pursue scientific advances in ways that are unpredictable and unsafe, or they could be deliberately tasked to achieve harmful outcomes. Biological AI tools and AI agents deserve particular consideration because they are likely to yield important benefits for society but may also pose significant biosecurity risks. However, methods for evaluating the emerging biological risks these tools and agents pose, and options for limiting those risks, remain underdeveloped. It will be important to pursue an aggressive research agenda to identify guardrails that can address this critical gap, and to work with model developers to pilot promising approaches that meaningfully reduce biological risks while enabling innovation that advances human health and promotes U.S. interests at home and abroad.
- Lead efforts to advance AI safety and nucleic acid synthesis screening to reduce global biological risks affecting U.S. interests. Powerful AI models are developed, and nucleic acid synthesis services are provided, in many countries around the world. To reduce biological threats that can emerge overseas and affect U.S. national security and economic interests, the U.S. government should take a leadership role in international efforts to advance AI safety and security and to promote robust nucleic acid synthesis screening. In addition to protecting U.S. public health, such efforts can also help ensure a level playing field in the global market for U.S. AI model developers and nucleic acid synthesis providers.
By incorporating these priority areas into the AI Action Plan, the U.S. can safeguard these critical technologies against misuse, ensure the competitiveness of its industry, and maintain its global leadership in AI.