While some artificial intelligence (AI) critics want to pause AI development, the pause most needed today is on overzealous regulatory proposals that could kneecap America’s lead in computational science and algorithmic technologies. With over 700 federal and state AI legislative proposals threatening to drown AI innovators in a tsunami of red tape, Congress should consider adopting a “learning period” moratorium that would limit burdensome new federal AI mandates as well as the looming patchwork of inconsistent state and local laws.

The time to do so is now, with the race for AI supremacy against China intensifying and other nations investing heavily to counter the United States. Handcuffing our AI innovators with layers of red tape would diminish domestic entrepreneurialism and investment, deny citizens many life-enriching innovations, and limit economic growth. Equally worrisome is how overregulation could undermine our technology base and potentially even our national security.

Mountains of Red Tape

Unfortunately, many lawmakers seem oblivious to these dangers, floating extreme AI proposals premised on far-fetched hypotheticals and dystopian sci-fi plots. Such fear-based thinking has led states to propose far-reaching controls on algorithmic technologies. Colorado just became the first state to advance a comprehensive AI regulatory measure, which Gov. Jared Polis (D) signed even though he worried that state regulations like it could create “a complex compliance regime for all developers and deployers of AI” and a patchwork of mandates that will “tamper innovation and deter competition.” California is also rapidly advancing a major bill that would impose onerous restrictions on “frontier” AI models and create a new bureaucracy to administer the rules.

Overregulation also looms at the federal level, with more than 100 AI-related measures pending in Congress. The Biden administration is simultaneously pursuing unilateral AI regulation through its “Blueprint for an AI Bill of Rights,” a massive 110+ page executive order, and a litany of new agency directives premised on vague notions of “algorithmic fairness.”

Most of these efforts are premised on the notion that government can preemptively legislate “responsible AI” by forcing innovators to run new ideas through a maze of bureaucrats to get a permission slip before innovating. Earlier this year, a top Biden administration tech official called for “a system of AI auditing from the government,” and suggested the need for “an army of auditors” to ensure “algorithmic accountability.” The resulting layers of technocratic meddling could lead to a death-by-a-thousand-cuts scenario for AI developers.

Undermining a Winning Formula

This is the exact opposite of the more flexible, market-driven approach the Clinton administration and Congress wisely crafted in the 1990s for the internet, digital commerce, and online speech. Rooted in policy restraint, that framework protected the freedom to innovate without first needing some bureaucrat’s blessing to launch the next great application or speech platform.

If American innovators and values are to shape today’s most important technology, we must not shoot ourselves in the foot as the global AI race heats up. Congress should pause overzealous micromanagement before it is too late. In the past, lawmakers have used forbearance requirements and moratoriums to protect innovation and competition, albeit to varying effect.

The Telecommunications Act of 1996 specified that “[n]o State or local statute or regulation, or other State or local legal requirement, may prohibit or have the effect of prohibiting the ability of any entity to provide any interstate or intrastate telecommunications service.” The law included other specific preemptions of state and local regulation, as well as a provision requiring the Federal Communications Commission (FCC) and state regulators to forbear from regulating in certain instances to enhance competition.

Another portion of the Communications Act meant to “encourage the provision of new technologies and services to the public” specifies that any party who opposes innovations “shall have the burden to demonstrate that such proposal is inconsistent with the public interest” and forces the FCC to make a decision within a year. Sadly, the FCC mostly ignores both this provision and the Telecom Act’s forbearance requirements, continuing to overregulate communications and media markets instead.

Federal moratoria have been more effective in protecting new technologies from bureaucratic meddling and excessive taxes. Congress passed the Internet Tax Freedom Act of 1998 (made permanent in 2016) to contain the spread of “multiple and discriminatory taxes on electronic commerce” and internet access. Similarly, the Commercial Space Launch Amendments Act of 2004 made sure federal regulators did not undermine the nascent market for commercial human spaceflight.



America does not need a convoluted new regulatory bureaucracy or thicket of new rules for AI. 

How to Structure an AI Moratorium and Preemption

These and other laws could provide a template for how to craft a moratorium or preemption for AI regulation. An AI learning period moratorium should block the establishment of any new general-purpose AI regulatory bureaucracy, disallow new licensing schemes, block open-ended algorithmic liability, and preempt confusing state and local regulatory enactments that interfere with the establishment of a competitive national marketplace in advanced algorithmic services.

An AI learning period moratorium would have many benefits. First, it would create breathing space for new types of algorithmic innovation to grow. This is especially important for smaller AI firms and the open-source AI marketplace, both of which could be decimated by premature overregulation of a still-developing sector.

Second, an AI regulatory moratorium would give policymakers and technology experts the chance to determine what problems deserve greater scrutiny and potential regulation. This pragmatic policy approach would limit damage from rash decisions and help us gain knowledge by testing predictions and policies before advancing new rules.

A learning period moratorium on new AI regulations does not mean zero regulation, however. Many existing laws and regulations already cover AI-enabled practices that implicate civil rights, consumer protection, environmental protection, intellectual property, and national security. Policymakers can still enforce those policies where harms exist and fill gaps as necessary, or they can use less restrictive approaches such as transparency and education-based measures.

A federal AI preemption standard will need to include carve-outs for some areas of traditional state authority, including education, insurance, and law enforcement. But regulatory preemption will be challenging because, as the “most important general-purpose technology of our era,” AI touches almost every field. For better or worse, some sectors and issues will remain the province of state and local governments.

Where a national framework proves untenable, state and local governments should craft harmonized light-touch frameworks—perhaps in the form of multistate compacts—to avoid burdening the development of a robustly competitive and innovative national marketplace in AI firms and technologies.

Review Existing Regulatory Capacity

When formulating an AI moratorium, Congress should simultaneously require the federal government’s 439 departments and agencies to do two other things. First, each agency should study and review existing policies that might already address algorithmic innovation in its field and consider how AI systems might already be overregulated under current law. Second, agencies should identify additional ways in which AI technologies might help improve government services. (It would be wise for state and local governments to engage in a similar review, although it need not be mandated by federal law.)

The Trump administration’s Office of Management and Budget (OMB) recommended some of these ideas to agency heads in a November 2020 guidance memo. “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” the OMB memo ordered. “Fostering AI innovation and growth through forbearing from new regulation may be appropriate,” and “agencies must avoid a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation.”

Unfortunately, in the wake of recent Biden administration orders and statements, agencies have instead been encouraged to expand their regulatory ambitions toward AI, even though Congress has not authorized such actions.

Conclusion

For the United States to remain the global leader in algorithmic technologies and computational capabilities, AI policy must be rooted in patience and humility rather than a rush to overregulate. Policymakers must avoid locking down America’s innovative potential and instead pause the panic-based AI regulatory policies under consideration today.

As the next great technological race with China and the rest of the world heats up, it is essential that our nation get the policy prerequisites of growth and prosperity right by once again embracing an innovation culture, one that positions the United States as the global leader in advanced computation.