The number of state and local artificial intelligence (AI) bills introduced in the United States has exploded in recent years, and now Colorado has passed a major measure that could open the floodgates to AI over-regulation across the nation. Meanwhile, Congress has done nothing to address this looming patchwork of confusing parochial policies, which could significantly undermine algorithmic innovation, investment, and competition.

On May 17, Colorado Gov. Jared Polis (D) signed into law SB24-205, a bill “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems.” The measure, which will take effect in 2026, aims to preemptively identify and eliminate the potential for “algorithmic discrimination” in some AI systems through costly new compliance mechanisms.

In a remarkable signing statement that read more like it could have been a veto notice, Polis detailed the many problems with the new mandates. While Polis’ statement seems to include more reservations than the average Hollywood prenuptial agreement, it is a cogent and accurate assessment of the burdens that Colorado’s new AI law will create. Other lawmakers should heed his warning about the dangers of the sort of over-regulation contained in the new Colorado measure.

The Governor Indicts His Own Law

Polis began by noting that, while most laws seeking to prevent discrimination focus on intentional conduct, “this bill deviates from that practice by regulating the results of AI systems use, regardless of intent.” He strongly encouraged the Colorado Legislature to reevaluate the wisdom of that decision before the law takes effect.

This is an astute observation in two senses. First, plenty of laws and regulations already exist that govern discrimination in various forms and contexts. Public policy toward AI should first look to tap existing agencies, rules, and court-based remedies to address problems that might develop, rather than creating all-new layers of unnecessary bureaucracy and red tape.

Second, the focus of AI oversight should be on finding and addressing direct examples of systems being used to discriminate in the real world. As other policymakers have observed, the most important principle for regulating AI is to focus on actual algorithmic outputs and outcomes, not the underlying models or systems themselves. Unfortunately, as experts noted before the measure passed, the Colorado law will lead to a bureaucratic fishing expedition in search of algorithmic discrimination.

Polis further explained that the new law would “create a complex compliance regime for all developers and deployers of AI” through “significant, affirmative reporting requirements,” which impose a range of compliance hassles on innovators in this space. He correctly points out that open-ended mandates like this will impose major costs on innovation, investment, and competition: “And while the guardrails, long timeline for implementation and limitations contained in the final version are adequate for me to sign this legislation today, I am concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.” (Emphasis added.)

Polis is right to be concerned. The Colorado measure will hit small AI providers, especially open-source innovators, particularly hard as they will struggle to comply with mountains of new red tape, such as mandatory impact assessments. Before the legislature passed the bill, a group of smaller AI developers sent a letter to Colorado lawmakers noting how the measure “would severely stifle innovation and impose untenable burdens on Colorado’s businesses, particularly startups.”

Polis Petitions for Preemption

There is little doubt that the Colorado law will spur copycat measures in other state legislatures, and many states already have attempted to advance major bills. A previous R Street analysis identified some of the most problematic measures introduced in other states, including California, Connecticut, and Hawai’i. While these laws vary in nature, almost every state is now moving forward with some sort of AI-related measure, creating the prospect of a patchwork of conflicting policies.

The question now is whether federal lawmakers are prepared to do anything about it. Perhaps the most remarkable aspect of the Polis signing statement was his plea to Congress to do something about this mess. He called for “a needed cohesive federal approach” that is “applied by the federal government to limit and preempt varied compliance burdens on innovators and ensure a level playing field across state lines along with ensuring access to life-saving and money-saving AI technologies for consumers.”

Will America Get Its AI Innovation Culture Right?

The principles Polis displays in his signing statement perfectly encapsulate the sort of “innovation culture” that America desperately needs right now. Moreover, his vision is squarely in line with the winning bipartisan freedom-to-innovate approach of the mid-1990s. That policy framework, which largely preempted state and local regulation of computing and online commerce, has kept the United States on the cutting edge of global digital innovation ever since, and it helped fuel remarkable economic growth.

This sort of pro-innovation national policy vision is completely missing today for AI. Congress needs to reorient its AI policy priorities before the United States loses ground to China and other nations that are looking to catch up in the race for global AI supremacy. A confusing and costly patchwork of heavy-handed compliance policies will kneecap America’s ability to maintain a global competitive advantage in algorithmic and computational capabilities.