R Street report calls for federal guidelines, liability shield to advance open-source AI
A detailed report from the R Street Institute spells out policy approaches for the secure development and deployment of open-source artificial intelligence systems, which the free-market think tank calls “indispensable” to U.S. leadership in global tech competition.
“If open-source AI is supported and guided intelligently, America has a unique opportunity to channel its culture of innovation and entrepreneurship into a force that strengthens national security, drives economic growth, and solidifies its position as a global leader,” according to the report by R Street resident fellow Haiman Wong, released April 17.
R Street describes itself as focused on “pragmatic” solutions to “complex public policy challenges through free markets and limited, effective government.”
Among its recommendations, according to a release, R Street calls for:
- Establishing federal guidelines to clarify legal ambiguities around AI development and best practices for the use and deployment of open-source AI models, systems, tools, and resources. These guidelines would be voluntary and adaptable.
- Fostering public–private partnerships for AI validation. This would help develop tools that could assess the safety, transparency, and reliability of open-source models.
- Implementing risk-tiered liability shields. This would give lower-risk models, such as those used in education, broader liability protection that encourages innovation while limiting legal exposure.
- Developing embedded provenance tracking systems to enhance transparency and accountability in open-source AI development. This could allow developers to verify and audit contributions in real time, ensuring a clear history of changes and reducing the risk of tampering (see the first sketch after this list).
- Deploying AI-driven anomaly-detection and behavioral analysis systems. These systems could flag unusual activities, such as spikes in downloads or malicious code commits, enabling timely intervention and enhancing the security and reliability of open-source AI projects (see the second sketch below).
- Promoting accountability mechanisms, such as community-driven reporting and moderation boards, which could review flagged issues or concerns and maintain a transparent record of resolutions.
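For illustration only, a provenance tracking system of the kind the report envisions could be built on an append-only, hash-chained log of contributions, where each entry commits to the previous entry’s hash so that retroactive tampering is detectable. The Python sketch below is not drawn from the report; the class and field names are hypothetical.

```python
import hashlib
import json
import time

class ProvenanceLog:
    """Append-only, hash-chained record of contributions to an
    open-source AI artifact (hypothetical illustration). Each entry
    commits to the previous entry's hash, so any retroactive edit
    breaks the chain and is caught by verify()."""

    def __init__(self):
        self.entries = []

    def record(self, contributor: str, artifact_digest: str, note: str) -> dict:
        # Link this entry to the hash of the previous one (or a zero genesis hash).
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "contributor": contributor,
            "artifact_digest": artifact_digest,  # e.g., SHA-256 of model weights
            "note": note,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        entry_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        entry = {**body, "entry_hash": entry_hash}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash in order; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body["prev_hash"] != prev_hash:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != entry["entry_hash"]:
                return False
            prev_hash = expected
        return True
```

Hash chaining of this sort makes tampering detectable rather than impossible; a production system would presumably add contributor signatures for attribution, which the report’s call for verifiable, auditable contributions implies but does not specify.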
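Similarly, the download-spike flagging the report describes could start from something as simple as a trailing z-score over daily download counts. This minimal Python sketch is a hypothetical stand-in for the AI-driven systems envisioned; the window size and threshold are arbitrary assumptions.

```python
import statistics

def flag_download_spikes(daily_downloads, threshold=3.0, window=14):
    """Flag days whose download count exceeds the trailing mean by more
    than `threshold` standard deviations (illustrative parameters)."""
    flagged = []
    for i, count in enumerate(daily_downloads):
        history = daily_downloads[max(0, i - window):i]
        if len(history) < 2:
            continue  # not enough history to estimate a baseline
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and (count - mean) / stdev > threshold:
            flagged.append((i, count))
    return flagged

# Example: a sudden spike on day 15 gets flagged.
baseline = [100, 105, 98, 102, 110, 95, 101, 99, 104, 103, 97, 100, 106, 102]
print(flag_download_spikes(baseline + [900]))  # -> [(14, 900)]
```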
In its release, the group notes OpenAI’s plans to release an open-source model and cites China’s DeepSeek open-source model as a development that “shocked the world.”
R Street says in the release that the increasing use of these models “is raising a number of concerns, especially related to cybersecurity threats such as data manipulation and model manipulation.”
Wong says in the paper, “Over the past two years, federal and state legislative efforts, along with industry-led initiatives, have already sought to establish clearer governance frameworks for responsible open source AI development. However, uncertainty remains over how best to govern these systems without undermining their role as drivers of U.S. innovation.”
She says, “This study identifies key policy priorities, emerging technological solutions, and best practices to ensure that open-source AI remains a force for our economic growth, global AI competitiveness, and national security.”
The study says “open source plays three major roles in advancing AI development: facilitating the creation of foundational datasets that fuel AI model training, providing the digital infrastructure necessary for collaborating on the refinement of AI systems, and democratizing access to AI resources that enable prototyping.”
It cites cybersecurity concerns over open-source models — which open-source advocates dispute — as well as a lack of “clarity” internationally over governance. It also examines pros and cons of closed-source systems, saying, “From a cybersecurity perspective, closed-source AI presents a paradox. While its restricted access reduces surface-level risks, such as tampering and unauthorized use, it can also create blind spots.”
“Furthermore,” the study says, “the inherently siloed nature of closed-source development may limit or even disincentivize opportunities for collaboration and information-sharing, impeding the creation of interoperable and resilient AI security frameworks.”
Wong writes that “there is a broad consensus that the ideal approach moving forward would be to find a way to marry the distinct benefits that each approach offers. Most of the ongoing debate and outstanding challenges center around establishing what an integrated approach should look like, determining which solutions to prioritize, and crafting regulatory frameworks for effective management.”