Lawmakers in Europe and California worry that open-source AI is “dangerous.” On the contrary: there is nothing dangerous about transparency.

Artificial intelligence (AI) policy debates span many contentious issues, among them one that has run throughout the history of computing: the battle between open- and closed-source systems. Today, this fault line has opened again, with lawmakers in California and Europe attempting to restrict “open-weights AI models.”

Open-weights models, like open-source software before them, are publicly available systems whose underlying model weights can be inspected and modified by various parties for varied purposes. Some critics argue that open-sourcing algorithmic models or systems is “uniquely dangerous” and should be restricted. However, arbitrary regulatory limitations on open-source AI systems would carry serious downsides, limiting innovation, competition, and transparency.

This issue took on new relevance recently following important announcements from government and industry. First, on July 30, the Commerce Department issued a major report on such models, as required by the AI executive order that President Joe Biden signed in October. The final report is largely welcoming of open-weight AI systems and “outlines a cautious yet optimistic path” for them. The report concludes that “there is not sufficient evidence on the marginal risks of dual-use foundation models with widely available model weights to conclude that restrictions on model weights are currently appropriate, nor that restrictions will never be appropriate in the future.”

Read the full article here.