Brandon Pugh, Policy Director, Cybersecurity and Emerging Threats at R Street Institute; Non-resident Fellow, Army Cyber Institute at the U.S. Military Academy at West Point: 

The National Security Memorandum (NSM) on AI correctly recognizes that leveraging AI in the national security arena is critical. While the national security community has used AI to some degree for years, it is positive to see this push continue and evolve to include both defensive and offensive applications. At the same time, efforts to secure America’s leadership position on AI are timely given adversaries’ increasing desire to surpass us and potentially leverage the technology in nefarious ways.

A common theme throughout the NSM is adhering to guardrails and following transparency, evaluation, and risk management requirements. This stems from concerns about how malicious actors might target the technology and how it could be misused contrary to U.S. values. However, AI also presents an opportunity to combat these very concerns, such as defending against adversaries and protecting the values some fear might be undermined. In fact, there are already examples of both.

Many of the requirements will be coordinated and acted on by a non-national security entity under the Commerce Department. There is a role for these activities, but an appropriate balance must be maintained with the need to innovate and lead on AI in a responsible manner. Competitors, including China, will not respect U.S. limitations or guardrails, so we must be careful not to go too far in restricting efforts and industry in the United States, while still ensuring we use AI responsibly.

For instance, the NSM envisions an accompanying “Framework to Advance AI Governance and Risk Management in National Security” that identifies prohibited uses for AI and high-impact use cases that require stricter oversight and due diligence (section 4.2(e)). These classifications should be reviewed initially and reassessed continuously to ensure potential national security uses are not unduly limited, especially since the technology is rapidly evolving and it is hard to appreciate all of its advantages and disadvantages at present. Notably, the framework will intentionally be kept separate from the NSM to make future changes easier.

An overarching question is how actions like the October 2023 Executive Order (EO) and this NSM would fare under a new administration and Congress, especially if there is a party shift. Some Republicans have already called for repeal should their party take the White House. Republican concerns about the EO and other efforts to regulate AI have largely centered on requirements that are too burdensome, permit agency overreach, or hamper innovation. How a future administration will handle AI is unclear, but there seemingly are points of agreement between the parties, including the central role of the private sector in AI development, the need to leverage AI for national security, and the potential for misuse.