The R Street Institute’s Adam Thierer testified this week before the Joint Economic Committee (JEC) at a hearing titled “Artificial Intelligence and Its Potential to Fuel Economic Growth and Improve Governance.” This hearing is part of the JEC’s exploration of the ways in which artificial intelligence (AI) can improve the fiscal health of the United States both through economic growth and reduced costs.

Thierer’s written and oral testimony focused on three main points:

  1. AI and advanced computational technologies can help fuel broad-based economic growth and sectoral productivity while improving consumer health and welfare in important ways.
  2. To unlock these benefits, the United States must pursue a pro-innovation AI policy vision that can help bolster our global competitive advantage and geopolitical security.
  3. We can advance these goals through an “AI Opportunity Agenda” that includes a learning period moratorium on burdensome new AI regulations.

During the hearing, senators and representatives from both sides of the aisle agreed on the vast potential for AI to revolutionize numerous parts of the U.S. economy. Particular attention was paid to health care, where the witnesses and members agreed that this new technology has the ability to unlock more sophisticated and effective ways of practicing medicine. From the creation of new treatments to improved diagnostics to reduced paperwork and administrative burdens for health care workers, AI has the potential to improve virtually every facet of our health care system.

When asked by Rep. David Schweikert (R-Ariz.) about the potential for AI to save the U.S. economy from increasingly difficult demographic trends, health care costs, and debt, Thierer noted, “It can certainly make a major contribution towards the betterment of our government processes and, potentially, our debt. There’s been various estimates, Congressman, on exactly how much AI could contribute to overall gross domestic product, the low end being something like at least 1.2 percent annually, but it goes up from there.” When Rep. Schweikert later asked whether advances in wearable technology like the Apple Watch had the potential to improve health in our country and how to sell that potential to consumers, Thierer explained:

Ye[s], absolutely. And to answer your question about how we essentially sell these benefits, we talk about it in terms of opportunity costs. What would we be losing, what kind of foregone innovation would we lose if we don’t get this right? Well, we can put our numbers on this. Let’s talk about some of the biggest killers in America today. Eight hundred thousand people lose their lives to heart disease; 600,000 people lose their lives to cancers every year now. I mean, how about cars? Let’s talk about public health and vehicles. Every single day there are 6,500 people injured on the roads in America—100 of them die, 94 percent of those are attributable to human error behind the wheel. I have to believe that if we had more autonomy in the automobile sector we could actually make a dent in that death toll. This is where we can talk to the public about the real world trade-offs at work if we get this wrong. We’ve had a 50-year war on cancer that goes back to the time when Richard Nixon was in office. And we’ve made some strides, but we can make a lot more if we had serious robust technological change to bring to bear on this through the form of computation and algorithmic learning. This is where we can make the most difference.

This testimony builds on Thierer’s “10 Principles to Guide AI Policy” for the Artificial Intelligence Task Force. One of the 10 principles Thierer articulates is that it is “important that lawmakers not demand that all AI systems be perfectly ‘explainable’ in terms of how they operate.” During the hearing, Rep. Don Beyer (D-Va.) questioned Thierer on this point, asking, “What are the limits of explainability? What can we as lawmakers really demand in terms of explainability?” Thierer responded, “Transparency is a good principle, but the question about how to mandate it by law is always tricky.” Instead of requiring that every aspect of an algorithm be explainable, Thierer recommended looking at the outcomes that algorithms produce to determine whether they need to be addressed. By focusing on outcomes rather than inputs, it is possible to prevent potential harms associated with algorithms without stifling innovation.

Another principle Thierer has articulated consistently is the need to ensure that AI policy remains rooted in a flexible, risk-based framework that relies on ongoing multistakeholder negotiations and evolutionary standards. During the hearing, Sen. Amy Klobuchar (D-Minn.) brought up the AI Research, Innovation, and Accountability Act, legislation she co-sponsored with Sen. John Thune (R-S.D.), which she said “takes a risk-based approach that recognizes different levels of regulation are appropriate for different uses of AI.” She then asked Thierer, “Do you agree that a risk-based approach to regulation is a good way to put in place some guardrails?” Thierer responded, “Yeah, absolutely. I love building on the [National Institute of Standards and Technology] framework because it was a multistakeholder, widely agreed-to set of principles for AI risk management. And so it’s really good to utilize the sort of existing regulatory infrastructure we already have and build on that first.”

Sen. Eric Schmitt (R-Mo.) focused his questioning on the race for AI superiority between the United States and China, noting, “The worst thing we could do in this race towards AI is stifle innovation by unleashing the bureaucrats and putting crippling regulations onto innovators. […] This would only serve to hamstring our innovation and give China the keys to this amazing technology. Mr. Thierer, what is it that we should be concerned about in this framework?”

Thierer responded:

As of noon today, there were 754 AI bills pending across the United States of America—642 of those bills are at the state level. That does not include all the city-based bills. […] The cumbersome nature of all those compliance rules added on top of each other—even if well-intentioned—could be enormously burdensome to AI innovators and entrepreneurs. The other thing to note is there have been discussions about the idea of overarching new bureaucracies or certain types of licensing schemes. I have no problem with existing licensing schemes as they apply in the narrow-focused areas where AI might be applied, whether it’s medicine, drones, or driverless cars. But an overarching new licensing regime for all things AI is going to be incredibly burdensome. That’s a European approach. We don’t want that.

We’re here on June 4th. It’s the 35th anniversary of the Tiananmen Square massacre. When we talk about the importance of getting this right for America and our global competitiveness, it’s important for exactly the reason you pointed out—if we don’t, and China succeeds, then they’re exporting their values, their surveillance systems, their censorship. The very fact that I just uttered Tiananmen Square at this hearing means this hearing won’t be seen in China. The bottom line is that means what’s at stake is geopolitical competitiveness, security, and our values as a nation. This is why we have to get it right.

This hearing represented a breath of fresh air on the topic of AI. Members from both chambers and sides of the aisle agreed that AI has immense potential for good and that we must avoid stifling that potential through onerous regulatory schemes. By focusing on the benefits AI can bring, pursuing a pro-innovation framework to unlock those benefits, and reining in burdensome regulations that would stymie innovation, we can ensure our continued prosperity, geopolitical strength, and the health of our nation.