Comments of the R Street Institute’s Cybersecurity and Emerging Threats Team in Request for Information on the Development of an Artificial Intelligence (AI) Action Plan
March 15, 2025
Office of Science and Technology Policy
2415 Eisenhower Ave.
Alexandria, VA 22314
ostp-ai-rfostp-ai-rfi@nitrd.govi@nitrd.gov
Re: Request for Information on the Development of an Artificial Intelligence (AI) Action Plan, Federal Register Number 2025-02305
Submitted Electronically
Comments of the R Street Institute’s Cybersecurity and Emerging Threats Team in
Request for Information on the Development of an Artificial Intelligence (AI) Action Plan
I. Overview of Comments
We appreciate the opportunity to respond to the White House’s Request for Information (RFI) on the Development of an Artificial Intelligence (AI) Action Plan.
As a nonpartisan, nonprofit public policy research organization headquartered in Washington, D.C. and focused on promoting free markets and limited, effective government, the R Street Institute (RSI) shares the Trump administration’s view that “artificial intelligence (AI) will have countless revolutionary applications in economic innovation, job creation, national security, healthcare, free expression, and beyond.”[1]
We recognize the crucial steps this administration has taken within the first few months of its term to reassert America’s leadership in AI and technological innovation. Notably, the rescission of President Joe Biden’s 2021 Executive Order (EO) 14110—which imposed heavy-handed regulations on private-sector AI advancement and had numerous cybersecurity implications—signals a welcome return to an open regulatory environment and pro-innovation policies.[2] President Donald J. Trump’s EO 14179 ensures that unnecessary compliance burdens do not unduly hinder private investment in AI.[3] This effort builds on President Trump’s first-term AI legacy, particularly EO 13859, which committed federal resources to AI research and development (R&D), established AI research institutes, and provided regulatory guidance to ensure AI remained an engine of U.S. economic and national security growth.[4]
Furthermore, President Trump’s recent announcement of the Stargate joint venture—a private-sector-led initiative with government support expected to drive up to $500 billion in private investment toward AI infrastructure across America—represents a landmark effort.[5] By accelerating domestic AI infrastructure development through regulatory assistance and policy backing, Stargate is poised to strengthen America’s technological foundation and competitive edge in the ongoing global AI arms race. This commitment, largely driven by private companies with the Trump administration’s encouragement, ensures the United States continues to lead in advanced AI capabilities, semiconductor manufacturing, and next-generation computing, thereby securing our long-term technological leadership and economic resilience.
Vice President JD Vance reinforced this vision at the Paris AI Summit in February 2025, emphasizing the urgent need to “look to this new frontier with optimism rather than trepidation.”[6] He contended that “…restrict[ing] [AI’s] development now, when it is just beginning to take off, would not just unfairly benefit incumbents in the space but would mean paralyzing one of the most promising technologies we have seen in generations.”[7] His remarks underscore President Trump’s commitment to viewing AI as an opportunity rather than a risk—an approach that fosters technological innovation while ensuring national security and economic growth.
Notably, Vice President Vance also challenged the growing calls to retreat from AI development in the name of safety, arguing that “the AI future will not be won by hand wringing about safety; it will be won by building—from reliable power plants to the manufacturing facilities that can produce the chips of the future.”[8] While AI “safety” concerns have often been used to justify overly cautious or restrictive policies, Vance’s remarks distinguish between excessive precaution and the imperative of AI security. Rather than stifling innovation, robust AI security is an essential foundation for America’s AI success and leadership, ensuring that AI-driven advances are reliable, trusted worldwide, scalable, and resistant to exploitation. As the Trump administration rightfully acknowledges, maximizing AI’s potential requires both technological ambition and a security-first approach that strengthens our resilience while fostering economic prosperity.
In alignment with this vision, the RSI’s Cybersecurity and Emerging Threats (CSET) team—which focuses on the national security implications of individual, business, and government cyber risk—urges the development of an AI Action Plan that prioritizes three key areas:
- Strengthening both AI security and our nation’s cybersecurity through AI-driven defense capabilities
- Establishing a balanced data-privacy framework that protects consumers without stifling innovation
- Maintaining America’s dominance in AI and technological innovation on the global stage.[9]
Although this comment focuses strictly on the privacy and cybersecurity issues related to AI, our RSI colleague Adam Thierer has submitted a separate comment addressing broader AI innovation and governance considerations.[10]
II. Cybersecurity
Ongoing advances in AI are already transforming the cybersecurity landscape for both defenders and adversaries. On one hand, AI-driven tools can compress incident analysis from minutes to milliseconds and even identify novel threats through predictive intelligence; on the other hand, malicious actors can easily exploit the same tools. For example, cybercriminal groups and advanced persistent threats from China, Iran, Russia, and North Korea have already used generative AI services to “translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.”[11]
Beyond these direct threats, foreign-developed AI models present additional cybersecurity concerns, particularly as adversarial nations increase their investment in open-weight AI systems with poor security guardrails. For instance, the release of DeepSeek’s R1 model in January 2025 exposed grave cybersecurity failures, including jailbreaking vulnerabilities and leaked chat histories, raising alarms about how unchecked foreign AI deployments could be exploited to collect data, spread disinformation, or facilitate cyberattacks.[12] To mitigate these risks, the United States must maintain leadership in AI development while exploring necessary restrictions on foreign AI models like DeepSeek and ensuring that critical AI infrastructure components do not fall into the hands of adversaries. Strengthening domestic AI capabilities is not just a matter of economic competitiveness—it is a national security imperative.
Over the past two years, RSI’s CSET team has brought together experts from academia, industry, civil society, and government to examine the intersection of AI and cybersecurity.[13] Our findings underscore AI’s growing role in offensive and defensive cyber operations, along with its vast potential for national security applications.[14] However, to harness these AI benefits fully, the Trump administration’s AI Action Plan must pursue a balanced and risk-based approach that mitigates legitimate and emerging cybersecurity threats without imposing restrictive regulations that undermine AI’s role and opportunity in cybersecurity. The following recommendations outline the unique role that federal policy can and should play in strengthening our national cyber resilience and protecting our AI innovation, which is especially important as adversarial countries like China seek to become the world’s AI leader.
- Address “gray areas” in AI development
Given the rapid evolution of AI in cybersecurity, the AI Action Plan would be well positioned to clarify significant policy and compliance ambiguities. The plan should issue targeted guidance to provide clarity on gray areas in AI development and deployment, from acceptable methods of AI-driven security research to risk-management expectations for AI deployments. For example, the National Institute of Standards and Technology (NIST) and the Department of Homeland Security’s Cybersecurity and Infrastructure Agency (CISA) should define permissible actions for security researchers using AI.[15] Clearer guidelines can both define and support the implementation of AI-driven vulnerability testing, red teaming, and threat hunting across public and private sectors, particularly for smaller and less-resourced entities that may lack the expertise to conduct such assessments effectively.[16] For example, standardized frameworks and toolkits could provide systematic guidance on AI red teaming, outlining best practices and permissible activities and ensuring that even organizations without dedicated cybersecurity teams could identify and mitigate AI-related threats. By removing legal uncertainty and establishing clear guardrails, these guidelines would empower researchers to strengthen AI security without fear of liability.
Additionally, the Trump White House should direct NIST to establish risk tolerance parameters and best practices for AI in security.[17] This guidance would help federal agencies and the private sector gauge how much risk is acceptable when deploying AI into mission-critical systems, guiding decisions on where human oversight or traditional controls might remain necessary. Moreover, the Department of Defense, in coordination with the National Security Agency, could also provide direction by expanding offensive cyber capabilities that leverage AI, thereby improving deterrence and clarifying the U.S. government’s role in preempting AI-enabled threats.[18]
- Prioritize industry-specific frameworks
AI’s cybersecurity risks vary greatly across critical infrastructure sectors and industries, rendering a one-size-fits-all approach impractical. The AI Action Plan should direct CISA, in partnership with sector-specific agencies like the U.S. Food and Drug Administration, to develop tailored AI security frameworks for key sectors including energy, finance, and transportation.[19] Each sector’s unique risk profile, from supply chain vulnerabilities to operational safety requirements, demands tailored guidance. For example, an AI system managing a power grid substation faces grid-disruption threats distinct from those of an AI system analyzing medical records or detecting financial fraud.[20]
These frameworks should not serve as compliance exercises; rather, they should provide a practical roadmap and best practices for sector-specific risk assessment and mitigation, developed in collaboration with industry stakeholders. They must also prioritize proactive defenses against AI-related vulnerabilities, such as adversarial attacks, data poisoning, and weaponization.[21] By ensuring these frameworks remain flexible and adaptive, America can safeguard critical systems while continuing to lead in AI innovation. Additionally, the United States should actively promote these security-driven AI governance principles on the global stage, countering efforts by foreign actors to impose restrictive “AI safety” measures that could undermine innovation and tilt the competitive landscape in their favor. Establishing American-led security standards would reinforce both domestic resilience and American leadership in shaping the future of AI governance.
- Promote responsible AI use
Promoting responsible AI use means leveraging AI’s full potential for cybersecurity and other applications while ensuring that risks are managed pragmatically. Emerging AI applications like digital twins (virtual models that can simulate cyber threats and responses) offer powerful ways to test system resilience and should be actively deployed, with their benefits weighed against their potential limitations and risks.[22] Rather than leaning into panic over exaggerated fears, the AI Action Plan should prioritize evidence-based threats, such as adversarial attacks on AI models or data breaches. To support this, CISA should issue voluntary, use-case-specific guidelines that help end users distinguish real and likely security risks from hype and ensure the clear promotion of balanced AI security techniques. Furthermore, continued investment in AI-driven cybersecurity R&D is essential. Many of the most novel security solutions have emerged from small AI companies, and the AI Action Plan should explore ways to support industry leaders in advancing AI-driven cyber defense technologies. This risk-based and innovation-friendly approach aligns with the Trump administration’s goal to minimize industry burdens while maximizing opportunities for AI-driven solutions.
As the United States strengthens its AI security policies, we must recognize that adversaries will continue to misuse AI regardless of any restrictions imposed domestically. While guardrails are necessary, the United States cannot afford to hamstring itself with overly cautious policies that limit innovation while foreign actors rapidly advance their own AI capabilities. A balanced approach that mitigates real security threats while preserving AI’s role as a strategic asset for national defense and economic competitiveness is critical.
The AI Action Plan must take a risk-based approach that addresses probable security challenges while ensuring that the United States remains at the forefront of AI-driven cybersecurity. Policymakers should recognize that overly broad restrictions could weaken security rather than enhance it. While concerns persist about AI deployments in critical infrastructure, restrictions should not unintentionally hinder well-established AI applications that have long bolstered cybersecurity, such as anomaly-detection products that leverage machine learning. To maintain leadership in AI, the United States must balance proactive security measures with the flexibility to leverage AI as a strategic asset—ensuring resilience against emerging threats without imposing unnecessary constraints on innovation.
III. Data Privacy and Security
Data is the lifeblood of AI innovation and development, and AI requires both more and better data. However, without robust privacy and security safeguards, this data can become an attack vector targeted by adversaries to harm and exploit Americans.[23] To protect individuals and promote trust in AI products and services, the AI Action Plan should prioritize the following actionable recommendations.
- Enact comprehensive federal privacy and data security provisions
The United States’ lack of a federal privacy law makes it an outlier among developed nations. Instead, we rely on inconsistent state requirements that leave Americans either unprotected or under-protected.[24] The current patchwork of about 20 state privacy laws forces industry to follow varying requirements, and the emergence of AI-specific state and local laws is only making things worse. The AI Action Plan should articulate support for a clear national data privacy standard that would ensure all Americans have baseline protections and provide businesses with one set of rules to follow.[25] Any guidance or rules should recognize that data uses, available protections, and privacy implications may vary between the development/training phase and the products and application phase.
Such a law should rely on strong preemption, and it should include balanced enforcement mechanisms that cannot be abused. However, it is critical for any privacy action to remain focused on privacy without adding AI-specific provisions and to ensure that broader privacy provisions do not inadvertently or unnecessarily limit AI or data requirements, such as rigid data-minimization rules. After all, AI is only one form of technology, and privacy rules should apply across all types. It would help to look at Texas and other states with existing privacy laws for inspiration rather than the European Union and its overreaching efforts.
Additionally, a privacy law should include data-security requirements. The only current requirements are sector-specific, which means some holders of sensitive data likely safeguard it inadequately. This is amplified by the fact that countries like China have an interest in stealing Americans’ data for nefarious purposes and can leverage AI to quickly make sensitive inferences, such as who might be an intelligence asset.[26]
- Leverage AI for data security
While AI can introduce new privacy challenges, it can also be a powerful tool for strengthening data security and compliance.[27] AI-driven systems can automatically scan, classify, and secure vast data stores, ensuring that sensitive information is mapped, protected, or deleted in accordance with privacy regulations.[28] These capabilities enhance compliance monitoring by detecting unauthorized data sharing or improper retention far more efficiently than manual reviews. Additionally, emerging techniques like AI-powered anomaly detection can help organizations proactively identify security threats, thereby preventing data breaches before they occur.[29] Given AI’s ability to enhance both defensive security measures and regulatory compliance, the AI Action Plan should encourage its responsible use in securing sensitive data.
- Promote privacy-enhancing technologies in AI development
Beyond improving security, AI can actively enhance privacy protections when
implemented responsibly. Privacy-enhancing technologies (PETs) like differential privacy, federated learning, and homomorphic encryption allow organizations to extract insights from data while preserving privacy, ensuring that personal information remains protected even as AI models learn from it.[30] To drive broader adoption of PETs, the AI Action Plan should support public–private partnerships, issue guidance, and provide targeted incentives—including federal grant programs, research funding, and workforce training incentives—to advance their development.[31] The AI Action Plan should also consider incorporating safe-harbor provisions or liability protections for organizations that adopt PETs in good faith, ensuring that companies are encouraged to implement privacy-first AI solutions without excessive legal exposure. By integrating PETs into AI systems, such as automated data anonymization or encrypted computation, researchers and developers can continue to innovate while ensuring that privacy remains a fundamental principle of AI development and deployment.[32]
Each of these recommendations aligns with the Trump administration’s vision to foster innovation while “ensuring that all Americans benefit from the technology [AI] and its transformative potential.”[33] Rather than impose rigid, AI-specific rules that might inadvertently hinder technological progress, a comprehensive privacy provision with risk-based safeguards would provide clear guardrails for data use and consumer protections.
IV. Open-Source AI
Open-source AI has quickly emerged as a driving force of innovation, fostering collaboration and expanding access to advanced AI tools. By making model code and weights publicly available, open-source AI enables researchers, startups, and large firms to build upon shared advances rather than developing systems and features from scratch.[34] This approach has fueled a competitive AI ecosystem in which breakthroughs accelerate through collective contributions.
The line between open-source and proprietary AI is blurring, with major tech companies integrating open models into their development pipelines.[35] As community-driven improvements rapidly enhance AI’s benchmark performance, open-source AI is poised to rival— or even surpass—proprietary models.[36] Given its strategic importance to America’s technological leadership, the AI Action Plan should embrace open-source AI development and address potential cybersecurity and governance challenges. To achieve this balance, we recommend the following policy provisions.
- Encourage secure deployment over blanket bans
Recent incidents, such as the DeepSeek-R1 model’s leaked data and jailbreaking vulnerabilities, highlight the need for basic cybersecurity hygiene in open-source projects.[37] However, they also show that open models can be used safely if placed in controlled environments.[38] For example, Microsoft quickly sandboxed DeepSeek-R1 on isolated servers with strict access controls through its Azure AI Foundry platform, allowing researchers to experiment with it without exposing sensitive data.[39] The AI Action Plan should promote similar strategies—such as running open-source AI models on air-gapped systems, using sandboxed environments, and monitoring for anomalies—to scale America’s AI innovation and advancement while minimizing risks.[40]
- Establish clear guidelines for open-source AI development and deployment
The AI Action Plan should also establish voluntary, risk-based best-practice guidelines for the secure development and deployment of open-source AI models. These guidelines could include measures like rigorous pre-release testing, transparency in model provenance, and additional safeguards for high-risk applications, such as deploying AI systems in critical infrastructure.[41] By providing an industry-aligned cybersecurity checklist instead of pursuing licensing, certification requirements, or other heavy-handed regulation, this approach would improve accountability and resilience in open-source AI without hampering its benefits.
- Incorporate tiered liability protection provisions for open-source AI
The AI Action Plan should consider incorporating liability protections that correspond with the risk levels associated with different types of open-source AI projects and applications.[42] Under this provision, developers of lower-risk models, such as tools for educational purposes, could benefit from broader liability shields that encourage innovation while limiting their legal exposure in cases of third-party misuse.[43] This approach would protect developers by offering clearer legal boundaries and reducing uncertainty.[44]
V. Conclusion
The AI Action Plan must prioritize policies that strengthen AI-driven cybersecurity, establish a balanced privacy framework, and ensure that the United States maintains its leadership in AI development and technological innovation for generations to come. We are happy to be a resource and stand ready to collaborate with policymakers to shape AI and emerging technology policies that promote innovation, cybersecurity, and economic growth.
Respectfully submitted,
Brandon Pugh
Policy Director, Cybersecurity and Emerging Threats
R Street Institute
Haiman Wong
Fellow, Cybersecurity and Emerging Threats
R Street Institute
This document is approved for public dissemination. The document contains no business-proprietary or confidential information. Document contents may be reused by the government in developing the AI Action Plan and associated documents without attribution.
[1] JD Vance, “Remarks by the Vice President at the Artificial Intelligence Summit in Paris, France,” The American Presidency Project, Feb. 11, 2025. https://www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.
[2] “Executive Order on Removing Barriers to American Leadership in Artificial Intelligence,” The White House, Jan. 23, 2025. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence; Brandon Pugh and Amy Chang, “Cybersecurity Implications of the White House’s AI Executive Order,” R Street Institute, Oct. 31, 2023. https://www.rstreet.org/commentary/cybersecurity-implications-of-the-white-houses-ai-executive-order.
[3] Ibid.
[4] “Executive Order on Maintaining American Leadership in Artificial Intelligence,” Trump White House Archives, Feb. 11, 2019.
[5] Steve Holland, “Trump announces private-sector $500 billion investment in AI infrastructure,” Reuters, Jan. 21, 2025. https://www.reuters.com/technology/artificial-intelligence/trump-announce-private-sector-ai-infrastructure-investment-cbs-reports-2025-01-21.
[6] Vance. https://www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.
[7] Ibid.
[8] Vance. https://www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.
[9] “Cybersecurity and Emerging Threats,” R Street Institute, last accessed March 5, 2025. https://www.rstreet.org/home/our-issues/cybersecurity-and-emerging-threats.
[10] Adam Thierer, “Comments of the R Street Institute in Request for Information on the Development of an Artificial Intelligence (AI) Action Plan,” R Street Institute, March 15, 2025. https://www.rstreet.org/outreach/comments-of-the-r-street-institute-in-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.
[11] “Disrupting malicious uses of AI by state-affiliated threat actors,” OpenAI, Feb. 14, 2024. https://openai.com/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors.
[12] Haiman Wong, “DeepSeek’s cybersecurity failures expose a bigger risk. Here’s what we really should be watching,” R Street Institute, Feb. 4, 2025. https://www.rstreet.org/commentary/deepseeks-cybersecurity-failures-expose-a-bigger-risk-heres-what-we-really-should-be-watching.
[13] “R Street Cybersecurity-Artificial Intelligence Working Group,” R Street Institute, last accessed March 5, 2025. https://www.rstreet.org/home/our-issues/cybersecurity-and-emerging-threats/cyber-ai-working-group.
[14] Ibid.
[15] Haiman Wong and Brandon Pugh, “Key Cybersecurity and AI Policy Priorities for Trump’s Second Administration and the 119th Congress,” R Street Institute, January 2025. https://www.rstreet.org/research/key-cybersecurity-and-ai-policy-priorities-for-trumps-second-administration-and-the-119th-congress.
[16] Ibid.
[17] Ibid.
[18] Ibid.
[19] Ibid.
[20] Ibid.
[21] Ibid.
[22] Ibid.
[23] Brandon Pugh and Steven Ward, “What does AI need? A comprehensive federal data privacy and security law,” IAPP, July 12, 2023. https://iapp.org/news/a/what-does-ai-need-a-comprehensive-federal-data-privacy-and-security-law.
[24] Brandon Pugh and Steven Ward, “Key Data Privacy and Security Priorities for 2025,” R Street Institute, January 2025. https://www.rstreet.org/research/key-data-privacy-and-security-priorities-for-2025.
[25] Ibid.
[26] Testimony of Brandon J. Pugh, Esq., House Committee on Energy and Commerce, “Hearing on Economic Danger Zone: How America Competes to Win the Future Versus China,” 118th Congress, February 2023. https://d1dth6e84htgma.cloudfront.net/Brandon_Pugh_Testimony_020123_Hearing_36ecfd8b92.pdf?updated_at=2023-02-01T14:31:57.744Z.
[27] Testimony of Brandon J. Pugh, Esq., Bipartisan Task Force on Artificial Intelligence United States House of Representatives, “Hearing on Privacy, Transparency, and Identity,” 118th Congress, June 28, 2024. https://www.rstreet.org/outreach/brandon-pugh-testimony-hearing-on-privacy-transparency-and-identity.
[28] Pugh and Ward. https://www.rstreet.org/research/key-data-privacy-and-security-priorities-for-2025.
[29] Steven Ward, “Leveraging AI and Emerging Technology to Enhance Data Privacy and Security,” R Street Policy Study No. 317, March 2025, p. 2. https://www.rstreet.org/research/leveraging-ai-and-emerging-technology-to-enhance-data-privacy-and-security.
[30] Ibid.
[31] Ibid.
[32] Ibid.
[33] Vance. https://www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.
[34] Ben Brooks, “Open-Source AI Is Good for Us,” IEEE Spectrum, Feb. 8, 2024. https://spectrum.ieee.org/open-source-ai-good.
[35] Ibid.
[36] Ibid.
[37] Wong. https://www.rstreet.org/commentary/deepseeks-cybersecurity-failures-expose-a-bigger-risk-heres-what-we-really-should-be-watching.
[38] Ibid.
[39] Ibid.
[40] Ibid.
[41] Ibid.
[42] Ibid.
[43] Ibid.
[44] Ibid.