R Street Institute analyst Adam Thierer offered strong support for the Klobuchar-Thune proposal because it builds on NIST’s AI risk management framework, which is based on a “widely agreed to set of principles for AI risk management,” he said at the hearing.

Thierer noted, “I wrote a paper about your bill, senator,” in response to a question from Klobuchar.

That paper, “Is AI Policy Compromise Possible? A Look at the Thune-Klobuchar AI Bill,” characterized the proposal as seeking to break a “logjam” between targeted AI rules and sweeping regulatory proposals by offering “a novel approach to AI governance that could serve as the basis for workable compromise.”

Klobuchar said she is urging her colleagues on the Commerce Committee to take up the bill, which was introduced in November.

Thierer said the proposal “takes on added importance” in the wake of President Biden’s sweeping AI order issued in October.

The Biden order “represents an everything-and-the-kitchen-sink approach to AI policy that opens the door to bureaucratic micromanagement of algorithmic systems. The EO empowers federal agencies to be far more aggressive in the oversight of AI markets in their respective fields,” Thierer wrote in his Dec. 4 analysis of the Klobuchar-Thune bill.

“By contrast, the Thune-Klobuchar approach relies on a more incremental, bottom-up, risk-based approach to AI governance,” Thierer wrote. It would “specifically build on the multi-stakeholder approach to AI oversight developed” by NIST through its AI risk management framework issued in January 2023.

“The bill distinguishes between ‘critical-impact AI systems’ — those that implicate critical infrastructure, criminal justice, national security or individuals’ biometric data — versus ‘high-impact’ AI systems, which are ‘developed with the intended purpose of making decisions that have a legal or similarly significant effect on the access of an individual to housing, employment, credit, education, healthcare, or insurance in a manner that poses a significant risk to rights afforded under the Constitution of the United States or safety,’” according to Thierer.

The bill would “require critical-impact system creators to conduct a risk-management assessment and make it publicly available to the Department of Commerce 30 days before release and to provide updated risk assessments going forward,” Thierer wrote.

Also, the bill “lays out standards by which developers of such systems should use” a test, evaluation, verification and validation, or TEVV, process “to self-certify adherence to various best practices.”

“As noted, the TEVV approach has been pushed by NIST and endorsed by the U.S. Department of Defense for defense-related AI development but would be applied more broadly under AIRIA,” Thierer observed.

Yet the Klobuchar-Thune bill does present some challenges, Thierer said.

“While more flexible than other pending legislative proposals or Biden administration efforts, AIRIA still represents an expansion of federal AI regulation,” he wrote.

“An analyst with the Center for Data Innovation at the Information Technology and Innovation Foundation argues that AIRIA ‘is jumping the gun,’ warning that ‘[r]ushing to establish AI standards without a clear understanding of the nuanced requirements in different sectors risks creating a framework that does not effectively address diverse contexts,’” according to Thierer.

But these challenges can be overcome, Thierer said, by narrowing the scope of high-impact rules and promoting continued use of voluntary standards like the NIST framework.

“These concerns could be addressed by narrowing the scope of what constitutes a critical-impact or high-impact AI system under AIRIA or by encouraging the Department of Commerce to use a more flexible soft-law approach to addressing these issues through continued refinement of voluntary best practices,” he wrote.

“Better yet, lawmakers should appreciate the benefits of continuing the same sectoral approach that has long guided tech policy, which relies on the many existing agency and court-based remedies that can address potential AI harms as they develop in various contexts,” he advised Congress.