Georgia’s primary elections might be over, but the campaign season and all of its annoyances, mud-slinging and—worst of all—interference have just begun.

The FBI warned that Russia, China and Iran may attempt to meddle in the upcoming American elections, and voters ought to gird themselves for a new wave of illicit outside attempts to influence electoral outcomes.

Yet this could go far beyond the so-called Russian “troll farms” that shared fake and disparaging news and engaged on social media in bad faith back in 2016.

“The FBI is concerned that foreign adversaries could deploy artificial intelligence as a means to interfere in American elections and spread disinformation,” according to the Associated Press. Artificial intelligence represents humanity’s next technological leap, and in fact, we are in the midst of it. AI technology is rapidly advancing, and if the government doesn’t regulate it into oblivion, it promises to greatly benefit our lives.

When fully developed, it may assist with research and medical care, operate autonomous vehicles and more, but like any tool, it can be misused. The FBI is apparently concerned that bad actors may use it to create what’s known as “deepfakes.” These are realistic, AI-generated digital likenesses of real people that, as the name suggests, are entirely fake.

Earlier this year, Reps. Brad Thomas, R-Holly Springs, and Todd Jones, R-South Forsyth, worked together to demonstrate the risks of deepfakes in a Senate Judiciary hearing. With all of the appropriate disclaimers, Thomas played a video, which he and Jones had created using inexpensive software, that appeared to show Jones speaking, along with audio that sounded like Sen. Colton Moore, R-Trenton—only it wasn’t them. Both were AI-generated deepfakes.

In all honesty, the video looked like Jones, but it was pretty clear that it was a digital recreation. The audio of Moore, however, was shocking. It sounded like him and even had a similar speaking cadence. Even more concerning, the software Thomas and Jones used was far from the most sophisticated available. There are more advanced programs out there, and the FBI is worried that foreign adversaries might exploit them to wreak havoc on our elections and undermine faith in them.

Can you imagine if someone created and shared a believable deepfake video of Donald Trump admitting to a host of unconscionable crimes on the eve of the election? How about one of President Joe Biden stating that he’s dropping out of the race due to health concerns and endorsing Robert F. Kennedy Jr.? Most of us would probably be skeptical, but some deepfakes are incredibly compelling. What’s more, foreign adversaries might create more plausible scenarios to trick voters.

During this past state legislative session, Thomas introduced legislation (HB 986) that would have made it a felony to create and use deepfakes of political candidates to mislead voters. The bill ultimately failed to reach Gov. Kemp’s desk, but even had it become law, it would have done little to deter foreign adversaries beyond Georgia’s jurisdiction.

The truth is that policing deepfakes—and any form of disinformation—from foreign actors is incredibly difficult to do, as is rooting out falsehoods spreading on social media. Searching for a government solution to these problems gets tricky very quickly, given our constitutional right to free speech and the government’s many shortcomings. I doubt that empowering the government to launch an Orwellian ministry of truth to vet all speech, social media posts and videos to determine what is appropriate would turn out well for Americans.

Instead, the antidote to deepfakes and similar forms of organized election disinformation is relatively simple. If you see, hear or read anything that seems fishy, don’t immediately share it. First, verify it by visiting official campaign websites, candidates’ social media accounts or traditional news sources. I understand that many news outlets are increasingly under fire for supposedly being “fake news” or promoting biased reporting.

To be fair, every person—journalist or otherwise—has biases, and some are obviously better than others at promoting objective reporting. That doesn’t mean that serious journalists are dishonest about the underlying facts, but if you’re still skeptical of them, then verify questionable information by visiting news outlets that skew to the left as well as to the right. Then you’ll get your answer.

It remains to be seen what kind of influence deepfakes will have on the upcoming elections. After all, a study covering January 21 to May 8 found only three notable election-related incidents of AI use, and they weren’t particularly effective. Nevertheless, such deepfakes may soon become increasingly common and easy to find, but thanks to free speech, an engaged press and an open internet, it may be even easier to debunk them. In the meantime, brace yourselves. Election season is going to be really annoying.