America is currently experiencing an artificial intelligence (AI)-fueled investment boom, with data center construction alone at a record-high rate of $28.6 billion a year, which is “roughly as much as America spends on restaurant, bar, and retail store construction combined.” Altogether, the past decade has seen a stunning $335 billion private sector-led explosion in AI investment—more than three times what China invested. Meanwhile, AI-enabled systems are starting to improve human health by revolutionizing drug discovery and helping society address the major causes of suffering and death more effectively.

These extraordinary AI-enabled investments and life-enriching innovations are happening because the United States has made freedom and flexibility the basis of digital technology policy. At least thus far, American policymakers have rejected the sort of top-down, bureaucratic mandates adopted by the European Union (EU), which have decimated Europe’s tech ecosystem. One journal recently labeled Europe “The Biggest Loser” when it comes to global digital innovation. Six of the world’s seven trillion-dollar companies are American, and all of them are major players in AI development. Europe has no major AI players in the top 25. European over-regulation is often cited as the primary cause of this technological malaise.

Unfortunately, some American policymakers and regulatory advocates are looking to adopt elements of the disastrous European regulatory playbook for AI, which would quickly halt innovation here, too. Across the nation, a rising tide of nearly 750 AI-related state legislative measures now threatens to encumber AI-enabled innovations and speech with layers of new mandates and compliance burdens.

This troubling trend continues with the “Texas Responsible AI Governance Act,” a sweeping new legislative proposal introduced by Rep. Giovanni Capriglione (R). This bill and other regulatory proposals like it threaten to derail the AI revolution just as intense competition with China and other nations gets underway in this crucial technological arena. Texas and other states would be wise to consider a more pro-freedom, pro-innovation model for AI policy that rejects fear-based policymaking, choosing instead to embrace flexible remedies and the freedom to innovate.

The Freedom to Innovate Under Threat

A quarter-century ago, federal lawmakers crafted a bipartisan national framework for internet commerce and online speech that unleashed an American-led digital revolution and an explosion of economic and social output. Sadly, those days appear to be ending, with a rising “techlash” producing widespread calls for comprehensive technology controls, especially on AI-related innovations.

State AI regulatory proposals address a wide array of concerns. While some measures seek only to address the impact algorithmic systems might have on specific issues or sectors, many states are introducing broad-based bills that seek to comprehensively regulate AI systems before or during their development and deployment. What unifies these bills is a desire for technology control through preemptive restrictions and new bureaucracies. Regulation premised on such “precautionary principle” thinking treats algorithmic innovations as guilty until proven innocent and requires innovators either to seek the equivalent of permission slips from bureaucratic authorities or to face onerous new penalties and forms of liability. Such bills would create complex and costly new compliance burdens that would undermine AI competition and innovation, especially among small and mid-sized firms.

Several states have already considered such arduous regulatory measures, and Texas might follow with its new draft AI bill. In May, Colorado passed a new AI law (SB24-205) that made it the first state to impose comprehensive new AI regulation. When signing the measure into law, however, Gov. Jared Polis (D) expressed concern about “the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.” Polis even asked Congress to preempt the law, calling for “a needed cohesive federal approach” to AI policy that has not yet materialized. Despite these reservations, he signed the measure anyway, giving other states a model to build on. Connecticut came close to enacting similar legislation but ran out of time during its legislative session.

Other states are considering similar models with help from a new Multistate AI Policymaker Working Group coordinated by the Future of Privacy Forum. The private organization will convene over 200 state lawmakers from more than 45 states to help them formulate more regulation of this variety. Capriglione serves on the steering committee for this effort, and his bill has much in common with Colorado’s regulatory model.

Texas-Sized AI Mandates

Much like the Colorado and Connecticut bills, the Texas draft legislation is preoccupied with trying to eliminate any possibility of “algorithmic discrimination,” especially for “high-risk” AI applications. To be clear, discrimination is already flatly illegal under Texas and federal statutes, and if any AI innovator were to engage in activities that violate civil rights or other consumer protections, a plethora of laws, regulations, and court-based remedies exist to address those harms.

But these proposed AI legislative measures presume that algorithmic discrimination is ubiquitous and demands preemptive regulatory steps to ensure it can never manifest in any fashion. This means various new ex ante compliance requirements will limit AI innovation based on fears of hypothetical worst-case scenarios. The EU calls such mandates “prior conformity assessments,” signaling how European officials expect all hypothetical problems to be addressed before innovation can take place. The EU pairs these assessments with a remarkably expansive list of “high-risk” AI applications for which they are required. This sort of precautionary principle-based regulatory model is precisely what got Europe into the trouble it finds itself in today (little innovation and no major tech companies); yet this is generally the same model that states like Colorado, Connecticut, and Texas now propose for America.

The expansive definitions found in these measures create enormous regulatory uncertainty. Under the draft Texas AI law, “high-risk” AI systems are those that represent a “contributing factor” in the making of a “consequential decision.” Each term is open to considerable interpretation. “Consequential decisions” are actions that have “a material legal, or similarly significant, effect on a consumer’s access to, cost of, or terms of” various services, including food, financial services, electrical and water service, legal services, housing, insurance, and health care, among many others. The law would also create a new Artificial Intelligence Council with the broad goal of ensuring that AI development in the state is “safe, ethical, and in the public interest.” None of those terms are defined, leaving bureaucrats to interpret them.

With this regulatory infrastructure in place, AI developers would be required to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses” of their systems. They would also need to create mandatory “High-Risk Reports” to identify any such theoretical risks and disclose an extensive amount of information about how their systems were created, among a variety of other disclosure and reporting requirements.

Deployers of such systems would be required to carry out impact assessments both before a system is launched and again after it is modified. Annual reviews are also mandated to ensure the system “is not causing algorithmic discrimination.” There are other requirements related to algorithms that could be “distorting the behavior of a person,” as well as bans on “social scoring” (i.e., programs that might be used to classify people according to their behaviors) and limits on biometric identifiers. The law would also impose a variety of new data collection disclosure requirements and use limitations, including opt-out rules for targeted advertising. Finally, the measure grants the Texas attorney general new investigatory powers to evaluate AI systems and authorizes fines of up to $100,000 per violation of the new regulatory scheme.

There are many other confusing elements in the draft bill. For example, it claims not to apply to open-source AI systems, but only so long as “the developer has taken reasonable steps to ensure that the system cannot be used as a high-risk artificial intelligence system without substantial modifications.” Of course, by their very nature, once open-source systems are released into the wild, a developer can make no ironclad guarantees about how they might be modified later. This confusion could undermine open-source systems, which play an important role in boosting AI innovation and competition.

Taken together, the open-ended requirements contained in the Texas AI bill would invite ongoing regulatory fishing expeditions as bureaucrats, trial lawyers, and anti-innovation activists use these ambiguous provisions to encumber AI innovators with a litany of roadblocks and slow the pace of algorithmic innovation in the state. That sort of precautionary principle-based policymaking is a recipe for technological stagnation.

The superior approach to AI policy is to ensure developers have the widest latitude possible to bring innovative products to market while holding them accountable when they run afoul of time-tested legal standards that protect consumers against harm and discrimination. The most important principle for AI regulation is that policy should address actual algorithmic outputs and outcomes (e.g., system performance), not hypothetical worst-case scenarios of what might go wrong based on initial system design.

Don’t Mess with Texas Innovation

Texas still has time to rethink its approach to AI policy. Instead of layering on so many confusing and costly new regulatory mandates, Texas should work toward becoming a leader in state-level AI policy as well as a haven for entrepreneurs looking to build the next generation of emerging technologies. In recent years, leading tech innovators and investors have fled California in unprecedented numbers to escape its high-tax, over-regulatory policies. Texas has been on the winning end of what some call “the hottest trend in business”: a so-called “tech exodus” of Californian firms and innovators like Elon Musk who are moving to the state to enjoy greater entrepreneurial freedom. Measures like the “Responsible AI Governance Act” would be a setback for the state and may drive some innovators to relocate elsewhere.

A better model for state AI policy is a recently implemented Utah law (S.B. 149) that serves as the basis of a Model State AI Act developed by the American Legislative Exchange Council (ALEC). The ALEC bill embraces AI as a great opportunity instead of trying to box it in with regulation, focusing on flexible responses to AI policy concerns and ongoing experimentation with different regulatory approaches. The bill also requires inventories to delineate how AI technologies are used within the state and to “identify regulatory barriers to AI development, deployment, and use,” while pinpointing any “regulatory gaps where existing law is insufficient to prevent” potential harms. A “Learning Laboratory” program encourages AI innovators to work with state officials to consider new regulatory approaches by creating partnerships that mitigate risks through so-called “regulatory mitigation agreements.” This is essentially a “sandbox” within which policy experimentation can happen.

Texas’ new draft AI bill does contain some pro-innovation elements, including a sandbox program to promote the innovative use of AI systems in certain already heavily regulated sectors (e.g., health care, finance, education, public services). This would allow firms to experiment with new offerings that might otherwise be limited by existing rules. The draft measure also includes a new “Workforce Development Grant Program” that would use grants and partnerships to help develop a more skilled workforce in AI and related fields like robotics and data science. These are more sensible policy steps for Texas lawmakers to build on.

But the Lone Star State, which treasures its independence and its own way of doing things, should not tie itself to regulatory models devised in more pro-regulation states or imported from EU directives. Instead, Texas should blaze a better trail toward AI opportunity and innovation in America.
