A detailed report from the R Street Institute spells out policy approaches for the secure development and deployment of open-source artificial intelligence systems, an area the free-market think tank calls “indispensable” to U.S. leadership in global tech competition.

“If open-source AI is supported and guided intelligently, America has a unique opportunity to channel its culture of innovation and entrepreneurship into a force that strengthens national security, drives economic growth, and solidifies its position as a global leader,” according to the report by R Street resident fellow Haiman Wong, released April 17.

R Street describes itself as focused on “pragmatic” solutions to “complex public policy challenges through free markets and limited, effective government.”

Among its recommendations, according to a release, R Street calls for:

The group in its release notes OpenAI’s plans to release an open-source model and cites China’s DeepSeek open-source model as a development that “shocked the world.”

“The increasing use of these models is raising a number of concerns, especially related to cybersecurity threats such as data manipulation and model manipulation,” R Street says in the release.

Wong says in the paper, “Over the past two years, federal and state legislative efforts, along with industry-led initiatives, have already sought to establish clearer governance frameworks for responsible open-source AI development. However, uncertainty remains over how best to govern these systems without undermining their role as drivers of U.S. innovation.”

She says, “This study identifies key policy priorities, emerging technological solutions, and best practices to ensure that open-source AI remains a force for our economic growth, global AI competitiveness, and national security.”

The study says “open source plays three major roles in advancing AI development: facilitating the creation of foundational datasets that fuel AI model training, providing the digital infrastructure necessary for collaborating on the refinement of AI systems, and democratizing access to AI resources that enable prototyping.”

It cites cybersecurity concerns over open-source models — which open-source advocates dispute — as well as a lack of “clarity” internationally over governance. It also examines pros and cons of closed-source systems, saying, “From a cybersecurity perspective, closed-source AI presents a paradox. While its restricted access reduces surface-level risks, such as tampering and unauthorized use, it can also create blind spots.”

“Furthermore,” the study says, “the inherently siloed nature of closed-source development may limit or even disincentivize opportunities for collaboration and information-sharing, impeding the creation of interoperable and resilient AI security frameworks.”

Wong writes, “There is a broad consensus that the ideal approach moving forward would be to find a way to marry the distinct benefits that each approach offers. Most of the ongoing debate and outstanding challenges center around establishing what an integrated approach should look like, determining which solutions to prioritize, and crafting regulatory frameworks for effective management.”