Louisiana is one of eight states that make up the southern boundary of the continental United States. Aside from America’s fourth longest federal highway (I-10) running through them, these states share very few geographic, cultural, or political characteristics. Nevertheless, they were recently on the verge of aligning on one of 2024’s hottest policy topics—that is, until Gov. Jeff Landry (R-La.) vetoed two bills that would have restricted the use of artificial intelligence (AI) and deepfakes in certain election communications. Had these bills been signed into law, Louisiana would have been the eighth in the bunch (and 18th nationwide) to implement such restrictions. While completing the eight-state chain of agreement on this issue would have been nice from a useless trivia perspective, the governor made the right call on the important matter of protecting free speech in the Pelican State.

Lawmakers nationwide are grappling with how to handle the proliferation of highly realistic deepfakes and other forms of disinformation used to deceive voters, which can be generated cheaply and easily using AI technology. Seventeen states now have laws on the books to regulate these deceptive communications, with most requiring them to include a disclosure stating they are AI-generated and do not reflect reality. Two states—Texas and Minnesota—go a step further by banning the use of deepfake videos for election-related purposes.

The bills vetoed by Gov. Landry would have established both types of restrictions in Louisiana, with SB97 requiring disclosure when AI is used to manipulate an election-related communication and HB154 imposing a ban on the distribution of false imagery, audio, or video intended to deceive voters or harm a candidate. Ultimately, the governor’s veto messages expressed concern that the bills may violate the First Amendment and indicated a preference for further research before rushing to regulate the emerging technology.

HB154 would have been especially vulnerable to legal challenge, as the prohibition approach appears to stand on shaky legal footing in general. In fact, Texas’ 2019 ban was struck down on appeal as an overly broad violation of free speech rights. Notably, Louisiana’s HB154 did not limit the ban to AI-generated false communications; rather, it would have applied to any oral, visual, digital, or written material containing manipulated false imagery, audio, or video, greatly expanding the restriction’s breadth.

While the legality of disclosure laws has not yet been tested in the context of AI, the U.S. Supreme Court has largely accepted the use of disclosures to inform the public about matters like funding sources for campaign advertisements. Of course, this does not guarantee that AI disclosure laws will pass constitutional muster—and even if they do, their practical impact remains to be seen.

For example, compliance is likely to be highest among official campaigns and political action committees already accustomed to navigating election regulations. They may even over-label content as being AI-generated out of an abundance of caution, which could dilute the disclosure’s effectiveness. On the other hand, bad actors seeking to confuse voters may simply disregard the requirement, while foreign actors remain beyond the reach of any U.S.-based laws.

Fortunately for Louisiana and the other 32 states that opted to sit out this year’s wave of AI regulation, the 2024 election will provide an opportunity to learn from the states that did jump at the chance to regulate AI in elections. The experience of these early adopters will provide more clarity on the legality of the restrictions as well as their effectiveness at protecting the public from electoral deception, which can help inform the appropriate policy moving forward.

In the meantime, there are ways to guard against deceptive election information outside of state laws restricting deepfakes and AI. For example, local election officials can solidify their role as trusted sources for election-related information by conducting proactive outreach and communicating with the public in advance of the election. Similarly, the private sector and the media can partner to raise public awareness around AI risks and the importance of maintaining skepticism when consuming election-related information.

All hope is not lost that the I-10 corridor states could one day align on this issue. After completing the additional study urged by Gov. Landry, Louisiana may reach the conclusion that a disclosure requirement or targeted prohibition is necessary to counter the harms of deceptive AI. Or even better, the other seven states could make a U-turn and join Louisiana in protecting free speech regardless of the technology used to create it.
