A Georgia Senate study committee recently embarked on an odyssey that would have seemed like a piece of science fiction only a handful of years ago: investigating artificial intelligence. The committee is tasked with recommending how to define and regulate the rapidly emerging technology, but the members’ charge is easier said than done.

Creating a regulatory framework for a technology that most Americans scarcely understand and that is regularly taking quantum leaps forward will require a delicate balancing act. AI is still in its infancy, but it promises to improve our quality of life and provide a bevy of benefits. Put simply, government regulation shouldn’t unreasonably stymie AI development.

Fortunately, the study committee appears to be in good hands. Sen. John Albers, R-Roswell, chairs the committee. He has a reputation for pragmatism and has worked on many technology policy issues. Another member, Sen. Ed Setzler, R-Acworth, chairs the Science and Technology Committee and studied physics in college. In short, expect this committee to delve into AI’s minutiae.

At the inaugural hearing, Albers set the tone and explained: “We celebrate in Georgia being the number one place to do business for over a decade right now. I will tell you that I believe the only way we will stay the number one place for business is if we are going to be the number one place for artificial intelligence in the future as well.”

AI is already supporting businesses. According to a Massachusetts Institute of Technology report, the majority of large businesses with more than 5,000 employees already use AI for tasks such as customer service and inventory and supply chain management. But AI has advanced beyond these humble applications. It will prove indispensable in advanced robotics, driverless cars, medicine and other fields, and AI’s decision-making algorithms are already impressing users.

In a limited study, Georgia State University researchers found that people often judge AI’s answers to ethical questions as more moral than humans’. “A new study has found that when people are presented with two answers to an ethical question, most will think the answer from artificial intelligence (AI) is better than the response from another person,” reads a recent university article.

AI is advancing in other ways too. NBC announced that it will use artificial intelligence to recreate sports broadcaster Al Michaels’ voice to narrate Paris Olympics highlights. Meanwhile, the U.S. Air Force is testing AI-piloted fighter jets as America strives to remain on the cutting edge of military technology. This only scratches the surface of what AI will be able to do.

“The opportunities in front of us will cure some of the world’s greatest issues and crises,” Albers said in committee. “I believe this will literally cure cancer and have breakthrough evolutions on helping people throughout the globe. However, it also has the propensity to do great harm.” This duality explains why the Senate created the study committee.

Some want to regulate AI into oblivion, but that doesn’t appear to be Albers’ goal. Even so, experts suggest that the best AI regulation would come from the federal government; otherwise, technology developers must contend with a patchwork of confusing and conflicting laws from 50 different states. As we all know, however, Congress is mired in perpetual dysfunction.

While AI applications are already regulated by a massive body of federal, state, local and court-based law, some states have taken the initiative in light of Congress’ perceived inaction. If Peach State lawmakers believe they must act as well, they ought to keep in mind some guiding principles outlined by my R Street Institute colleague Adam Thierer. First and foremost, they should adopt the least restrictive means of safely regulating AI outcomes while fostering technological growth, and the “freedom to innovate” ought to be the statutory standard.

Beyond this, lawmakers shouldn’t attempt to regulate AI in a vacuum. Given that Albers’ study committee is accepting testimony, its members seem amenable to hearing from stakeholders and technology companies across the spectrum, who should be viewed as critical partners. Moreover, legislators should focus on using existing laws and court remedies to address AI concerns and eschew one-size-fits-all regulatory frameworks. AI varies greatly by application and sector, and regulation tailored to each sector will prove the most beneficial.

Again, many experts agree that a consistent, light-touch regulatory framework across all 50 states is the best approach, but because Congress is generally paralyzed by inaction, states will inevitably take the lead. As Georgia lawmakers consider ways to regulate AI, they should remember that while AI may feel like science fiction and will be a paradigm-shifting innovation, it is just another technology that will ultimately benefit humanity. Overregulating it, however, risks frustrating further development and allowing China to surpass the U.S. as a leader in AI.