Legislative sessions are in full swing across America, and for the second consecutive year, states are considering bills that crack down on the use of artificial intelligence (AI)-generated deepfakes in election communications. There was a high degree of concern heading into the 2024 election that AI would be used to turbocharge the spread of misinformation about candidates and the election process. In response, 16 states approved bills aimed at curbing deceptive uses of AI, bringing the total number of states with these restrictions to 20. Now, despite little evidence that AI impacted the 2024 campaign at the level originally feared, lawmakers in 25 states are considering a fresh round of proposals in 2025. This analysis will explore the current legislative landscape and discuss key policy considerations for lawmakers advancing proposals to regulate the use of AI in certain forms of political speech.

Overview

So far this year, lawmakers in half the states have introduced bills dealing with the use of deepfakes, AI, and other forms of digital technology in an election context. In 21 of these states, the bills would establish AI restrictions for the first time, while lawmakers in Mississippi, New Hampshire, New York, and Texas introduced bills revisiting laws passed in previous years. Lawmakers in Virginia, South Dakota, and Kentucky have sent bills to the desks of Gov. Glenn Youngkin, Gov. Larry Rhoden, and Gov. Andy Beshear, where they await signature or veto. Additionally, bills in Montana and Maryland have advanced through each state’s senate and now await action in the house.

Bills in Kentucky, Maryland, Montana, South Dakota, and Virginia have seen strong support overall, with 83 percent of total votes cast in favor. The Virginia Senate vote on SB 775 was the narrowest margin so far, with 22 in favor and 18 opposed. On the other end of the spectrum, SB 4 passed the Kentucky House and SB 164 passed the South Dakota Senate with over 90 percent voting in favor. Nearly all opposition has come from Republicans; only one Democrat has cast a dissenting vote against any of these proposals in 2025.

While the most common approach to regulation is a labeling requirement for deceptive AI-generated content related to elections, proposals in Maryland, Massachusetts, and New York would instead prohibit this content outright. The split between the two approaches this year mirrors that of the first 20 states to approve these laws: 17 imposed a labeling requirement, while three (California, Minnesota, and Texas) took the prohibition approach.

Prohibition and Disclosure

One likely reason most states favor disclosure over prohibition is concern about regulating speech in light of the First Amendment. A federal judge blocked California’s prohibition law in 2024 over concerns that it violated the First Amendment, writing that AB 2839 “unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.” A prohibition law in Minnesota faces a similar legal challenge. While there is no guarantee that disclosure requirements would survive legal scrutiny if challenged, most lawmakers are opting for the lighter-touch disclosure approach, which has not yet been blocked in federal court.

Two exceptions to that general trend are Maryland and Massachusetts. Both states are considering legislation that would prohibit the use of AI in certain types of election communications, but they are doing so in ways that seek to address First Amendment concerns. For example, Maryland’s SB 361 prohibits the use of “synthetic media” for deceptive purposes by including it under the definition of fraud. The bill remains problematic as drafted; however, the attempt to deal with the deepfake challenge using familiar legal concepts is a thoughtful approach that seeks to narrow the prohibition’s application.

Meanwhile, S.44 in Massachusetts primarily prohibits deceptive content about the voting process itself, such as the time and date of an election, registration requirements, and certification. However, the bill also applies to false candidate or proposition endorsements. It would have set a meaningful limitation had it addressed the voting process alone, avoiding speech about political candidates and issues and focusing instead on combating misinformation about election mechanics.

Despite these creative attempts to narrow their application, it remains unclear whether either bill could withstand review under the strict scrutiny standard that would likely apply.

The disclosure bills also include provisions that narrow their application to varying degrees.

Overall, a narrowly tailored disclosure requirement would stand a better chance under judicial review than a prohibition. However, even if such restrictions on speech are ruled constitutional, implementing these policies can be problematic from a limited-government perspective: they expand the government’s role in regulating political discourse and set the stage for unintended consequences that could erode trust in elections.

Implementation Considerations

One argument in favor of requiring disclosure for AI-generated election content is that it is simply a public transparency measure. However, the primary mechanisms used to enforce compliance (civil penalties, injunctive relief, and criminal prosecution) all contribute to a concerning expansion of the government’s role in policing political speech.

Montana’s SB 25 is a useful example, as it includes all three enforcement approaches. First, on the civil side, it permits a candidate who is falsely depicted in an unlabeled deepfake to seek a court injunction to prevent its distribution. Second, Montana’s election regulator, the commissioner of political practices, may investigate a complaint alleging a violation of the disclosure requirement and issue a fine of up to $500 for an initial violation. Third, for repeat offenders, the commissioner is required to refer the case for criminal prosecution by a county attorney or the state attorney general, with penalties reaching up to two years in state prison for a felony conviction after three violations.

These mechanisms both expand the government’s role and require courts and regulators to issue rulings, levy fines, or pursue prosecutions based on the government’s interpretation of truth or falsehood. The degree of deception or factual inaccuracy in an AI-generated communication is often a central factor in determining whether it requires a label. Lawsuits and complaints alleging violations of these restrictions will inevitably turn on what courts or regulatory agencies deem to be true or false, a distinction that is not always clear in the context of a political campaign. Rather than enlisting the judiciary and state agencies to tip the scales in one direction or the other, lawmakers should let the public decide what to believe.

Finally, even under an optimistic scenario of broad compliance with disclosure requirements, unintended consequences could elevate the salience of false information. The theory behind requiring disclosure is that a label signals to the public that the content has been manipulated in some fashion and should therefore be viewed with a certain level of skepticism. If that mindset takes hold, the public may draw the reverse inference: content without a label can be trusted because it has not been manipulated. Yet the vast majority of political speech will remain beyond the reach of any federal or state regulation, setting the stage for truly false information, generated with or without AI, to gain additional traction. A better approach is for the government to let the public engage in free speech without imposing potentially counterproductive labeling requirements.

Conclusion

State lawmakers across the country continue to approve new restrictions on deceptive uses of AI in election communications. Most of the proposals focus on requiring labels, though a few states persist in exploring prohibitions that could prove unconstitutional. Regardless of the approach, likely outcomes include an expanded government role in political discourse and unintended consequences that could erode trust in the election process. Rather than pursuing these heavy-handed restrictions, lawmakers should leave it to campaigns and the public to debate and discuss false claims.
