Artificial intelligence (AI) legislative proposals continue to multiply across the United States, with over 760 bills now pending, 114 of them federal. A recent R Street analysis examined some of the major state and local AI regulatory bills currently moving, including one that passed in Colorado in May. R Street also produced Fall 2023 and Spring 2024 AI legislative outlook updates discussing important federal AI bills under consideration.

Some federal AI bills propose a hybrid style of governance that would meld “hard law” (formal regulations) and “soft law” (informal, less-binding mechanisms). Soft-law tools include multi-stakeholder processes, voluntary best practices, industry standards, third-party oversight mechanisms, government guidance documents, and more. Soft law is an increasingly prevalent governance approach in digital technology because its processes can evolve rapidly and flexibly to address a variety of fast-moving tech policy concerns.

Sen. John Hickenlooper (D-Colo.) recently proposed a new hybrid AI governance measure. As with other leading AI bills floated in the U.S. Senate recently, the Validation and Evaluation for Trustworthy Artificial Intelligence Act (VET AI Act) would empower the National Institute of Standards and Technology (NIST) within the U.S. Department of Commerce (DOC) to play a larger role in overseeing algorithmic systems by establishing “AI auditing” guidelines.

However, the VET AI Act establishes only voluntary guidelines, while other bills propose giving the DOC limited new forms of regulatory authority. Although federal AI legislation is unlikely to be finalized in a busy election year, these measures set the stage for the AI policy debate in the next session of Congress and foreshadow how the DOC could become America’s leading AI oversight body, with NIST at the center of the action.

The Rise of AI Auditing

Some technical and policy-related background about AI auditing is needed to understand the VET AI Act’s approach to AI governance. AI auditing and algorithmic impact assessments are governance tools attracting growing academic and policy interest today. These mechanisms can be used either before or after the deployment of an AI system to evaluate its performance against a variety of benchmarks. Depending on their structure, such audits and impact assessments could be administered voluntarily by system vendors, conducted by independent third parties, or required by government bodies. Several state and local AI-related legislative measures have proposed mandatory impact assessments or audits, including bills passed in Colorado and New York City.

The Biden administration has pushed auditing and impact assessments under the rubric of “AI assurance” or “AI accountability policy” in its 2022 Blueprint for an AI Bill of Rights as well as in a massive 110+ page AI executive order last October and a variety of statements. Of particular importance was a March report from the National Telecommunications and Information Administration (NTIA), another DOC division, which advises the president on information policy issues and spearheads other multi-stakeholder efforts on technology matters. The NTIA’s AI Accountability Policy Report backed the expanded use of AI audits but was vague about how they should be enforced: “We recommend that future federal AI policymaking not lean entirely on purely voluntary best practices,” the agency said. “Rather, some AI accountability measures should be required.” NTIA concluded that “work needs to be done to implement regulatory requirements for audits in some situations.”

Before the report’s release, the head of the NTIA called for “a system of AI auditing from the government” and suggested the need for “an army of auditors” to ensure “algorithmic accountability.” The NTIA’s report also recommended a national registry of disclosable AI system audits, international coordination on “alignment of inspection regimes,” and “pre-release review and certification” of certain systems or models. Taken together, these recommendations signal a growing push to formalize AI audits and impact assessments.

The goal of these and many other policy proposals is to ensure safe or “responsible AI” prior to the release of new algorithmic products. As previous R Street research has noted, however, AI audits or impact assessments imposed in an overly rigid fashion could stymie innovation by creating a paperwork-intensive compliance system that would be open-ended, costly, and potentially quite politicized. Auditing algorithms is highly subjective and nothing like auditing an accounting ledger. “When evaluating algorithms,” we noted, “there are no binary metrics that can quantify the scientifically correct amount of privacy, safety, or security in a given system.” Others have worried that AI regulations could be “weaponized” if government officials use them to jawbone developers, especially if such rules pressure developers to censor speech.

Best Practices, Not Mandates

Sen. Hickenlooper’s VET AI Act wisely does not mandate AI auditing. Instead of putting NIST in charge of enforcing a federal AI auditing regulatory regime, the bill instructs NIST to work with a variety of other agencies and stakeholders “to develop detailed specifications, guidelines, and recommendations for the certification of third-party evaluators to work with AI companies to provide robust independent external assurance and verification of their systems.” These voluntary auditing parameters would guide how AI system developers and deployers “conduct internal assurance and work with third parties on external assurance” to improve dataset quality and identify data privacy concerns or other potential harms. The bill also establishes a new advisory committee within NIST “to review and recommend criteria for individuals or organizations seeking to obtain certification of their ability to conduct internal or external assurance for AI systems.”

The VET AI Act comes on the heels of three other Senate legislative proposals that also envision a greater role for NIST in overseeing AI. These measures all build upon the AI Risk Management Framework (AI RMF), an iterative governance framework developed over time by NIST. NIST works with a wide array of stakeholders to develop voluntary, consensus-based standards for technical matters like cybersecurity, privacy, and now AI. Those three bills, sponsored respectively by Sens. Warner and Blackburn, Cantwell and Young, and Thune and Klobuchar, would each expand NIST’s role in overseeing AI policy in different ways.

In terms of breadth, Sen. Hickenlooper’s VET AI Act fits between the Warner-Blackburn bill and the Cantwell-Young bill but is not as restrictive as the Thune-Klobuchar bill. Importantly, however, Sen. Hickenlooper is a co-sponsor of the Cantwell-Young and Thune-Klobuchar bills, and his new VET AI Act could build on both of them by using their AI safety standards and practices as benchmarks for future AI audits.

The Commerce Department’s Growing Role in AI Policy

Smartly, these four bills do not propose new broad-based AI licensing schemes or new technocratic AI bureaucracies. Such mandates or bureaucracies would be costly and counterproductive in practice, generating considerable opposition and protracted political battles. The VET AI Act and the three other Senate measures have a better chance of generating legislative consensus because they all build on an existing agency and use the NIST AI RMF as the foundation for collaborative standards and best practices.

However, these bills raise the question of whether NIST and the DOC should receive quasi-regulatory powers to design and enforce AI audits or other algorithmic oversight policies. NIST and NTIA multi-stakeholder efforts and standards have gained widespread acceptance precisely because they are collaborative, iterative, and voluntary. If new laws formalize this process and give it more enforcement teeth, the system could become more political and less flexible over time. In other words, efforts to move soft-law processes in a hard-law direction could derail the benefits of those more flexible governance mechanisms.

How Recent Supreme Court Decisions Play Into This

The Supreme Court recently handed down two decisions—Loper Bright v. Raimondo and Murthy v. Missouri—that could have a bearing on how these bills and soft-law AI governance play out. Loper Bright overturned so-called “Chevron deference,” a standard of judicial review that left great leeway to agencies when interpreting and enforcing statutes. Courts will now hold agencies to a higher standard to ensure their actions more closely align with congressional intent.

The Murthy decision cut the other way by rejecting a claim that government efforts to jawbone social media platforms violated the First Amendment. While the Court decided the case on standing grounds and the underlying issues could be taken up again later on the merits, the short-term effect of Murthy is that government officials retain broad leeway to jawbone companies and encourage them to change their behavior in various ways without any formal rules being passed.

The combined effect of Loper Bright and Murthy could be that some federal agencies, including the DOC, will lean on soft-law governance mechanisms to an even greater extent. Again, while the four Senate bills discussed above all push for the continued development of voluntary best practices for AI safety, they also envision an expanded role for government in helping to formulate and steer those policies. This leaves considerable policy discretion to NIST and the NTIA to determine the scope and nature of AI safety standards.

The director of NIST’s new U.S. AI Safety Institute, who previously served at the White House, recently said that the Institute is already “building out a suite of evaluations” and will “be sharing feedback with model developers on where mitigations may be needed prior to deployments.” She noted that the forthcoming guidance and benchmarks will look beyond just safety and security matters and that the agency is “going to be looking at societal harm perpetuated by frontier models and systems.”

To reiterate, neither NIST nor NTIA possesses any formal authority to regulate private AI systems in the way the Federal Communications Commission has the power to license or regulate certain telecommunications or media technologies. But the potential exists for NIST and NTIA to pursue backdoor AI regulation while the DOC emerges as America’s de facto AI bureau. For better or worse, the VET AI Act and the other proposed Senate bills would solidify and extend the department’s power over algorithmic systems and let it steer developer behavior through amorphous soft-law policies. Lawmakers would be wise to limit that discretion and ensure that AI oversight remains as flexible and voluntary as possible.
