Implications of Biden’s national security memorandum for artificial intelligence
This analysis is in response to breaking news and will be updated. Please contact pr@rstreet.org to speak with the author.
The White House released the unclassified version of its National Security Memorandum (NSM) on artificial intelligence (AI) today, following remarks by National Security Advisor Jake Sullivan. Broadly speaking, the memorandum seeks to govern the use of AI in “national security systems,” sustain U.S. leadership in AI development and use, and foster AI adoption in the national security and intelligence arenas. These aims are important, but the NSM and its accompanying guidance document must be assessed carefully to ensure that the use of AI in national security is not unduly limited and that the United States retains its ability to lead on AI in a responsible manner.
The NSM is a product of the Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released on Oct. 30, 2023, which some criticized as too heavy-handed. One provision of the EO called for an interagency process to develop, within 270 days, an NSM addressing both AI adoption and adversaries’ use of AI. The cybersecurity and broader national security implications of the EO were previously explored in an R Street analysis, and R Street’s Cybersecurity-AI Working Group has examined ways in which AI can be a positive force for cybersecurity.
There is much to unpack in the NSM, but several high-level items stand out. These are preliminary reactions, with a more detailed analysis to follow.
1. The NSM is not an isolated action. The Office of Management and Budget previously provided rules for federal government use of AI, but those rules were geared toward federal civilian agencies. The NSM is intended to complement and build upon those themes and apply them to national security systems, with action steps directed both at specific federal agencies and across the federal government as a whole. At the same time, offices like the Chief Digital and Artificial Intelligence Office within the Department of Defense (DOD) have already done tremendous work on AI use and policy. Arguably, the DOD led the way on AI research well before the recent focus on AI by policymakers; thus, existing efforts should be reviewed in light of this action.
2. The NSM is only partial. It is important to remember that a classified version of this document also exists, meaning that the public cannot access the full product. Likewise, the NSM is accompanied by a governance and risk management framework intended to be more easily updated as needs evolve than the NSM itself (Section 4.2(e)(i)).
3. Maintaining U.S. leadership is critical. The United States has many advantages in AI development and use, which is a central premise of the NSM. For example, many of the leading hardware companies and AI developers, along with much of the top technical talent, are based in the United States. The NSM seeks to support private-sector developers with cybersecurity and counterintelligence resources and to designate the AI Safety Institute (AISI) as industry’s primary point of contact with the federal government, although related efforts have been criticized (Section 3.3(c)). The NSM assigns numerous new duties to AISI, from issuing testing and evaluation guidance to potentially determining whether dual-use foundation models might harm public safety (Section 3.3(e)). AISI’s expanded role in national security is something to assess and watch carefully, especially given the role the Commerce Department would play in national security applications.
4. National security uses are limited. The NSM sets out both prohibited uses of AI and high-impact use cases that require stricter oversight and due diligence (Section 4.2(e)). This guidance must be reviewed now and reassessed continuously to ensure potential national security uses are not unduly limited, especially since the technology is evolving rapidly. Likewise, because adversaries will not respect guardrails and limits, it is important that this guidance keeps the United States from falling behind while still leveraging AI responsibly.
5. AI adoption must be a priority. National security agencies and the military have already leveraged AI, and that trend should continue as adversaries seek to do the same, but for nefarious purposes. The NSM “demands” the use of AI systems in these cases. Private-sector engagement is imperative, since much of the development has occurred within the private sector; however, it is important to remember that AI is not a panacea. Take cybersecurity, for instance, where AI can play a critical role, though humans should still be at the center.
Rivals will continue their efforts to surpass the United States in both AI development and use. They will also try to undermine U.S. efforts, which can have serious consequences for national security. This reality must be our guiding light for AI actions in both the civilian and national security arenas.