On Tuesday, Sept. 10, the Federal Election Commission (FEC) announced that it does not intend to enact new regulations targeting the use of artificial intelligence (AI) to generate deceptive content in elections for federal office, a decision that concludes a yearlong debate over whether the agency would take action. At first glance, this decision may appear to greenlight deepfakes and other forms of election deception. However, the FEC also confirmed its interpretation that existing federal laws prohibiting “fraudulent misrepresentation” still apply to AI-generated content. Pending final adoption at the agency’s next open meeting on Sept. 19, this technology-neutral approach regulates prohibited activities rather than the tools used to conduct them, providing a useful policy framework for lawmakers and regulators adapting to AI and other emerging technologies.

The debate around the regulation of AI in federal elections comes down to how the FEC can or should implement federal law under two circumstances. First, candidates for office (or their employees) cannot misrepresent an opposing candidate for the purpose of electoral advantage. Second, a person cannot misrepresent themselves as being affiliated with a candidate or political party for the purpose of soliciting contributions. Technological advances create an opportunity to commit this type of fraud with assistance from AI, driving some to call on the FEC to take action.

In May 2023, the FEC received a petition for rulemaking seeking to clarify that existing regulations apply when AI is used to conduct fraudulent misrepresentation. The initial petition failed after a split vote, but an amended version was accepted in August 2023, kicking off a public comment period that generated more than 2,000 submissions. Under the agreement announced on Sept. 10, the FEC will conclude its review of that petition without initiating a formal rulemaking, but it confirmed through an interpretive rule that existing law can still be applied when AI is used in the underlying action that violates it.

The primary reason the FEC cited for forgoing additional regulation is that federal law is “technology neutral,” meaning the agency does not have the authority to draft rules that apply specifically to AI or any other technology. The FEC did not speak to the merits of adopting further regulations around the use of AI in federal elections, correctly leaving that policy debate to Congress.

However, even if the FEC did have that authority, there is good reason for it to stick with the current technology-neutral approach. Regulating the harmful activity rather than the tool used to perform it provides a more efficient and effective regulatory framework. As Chair Sean Cooksey points out in a recent article, the FEC lacks the technological expertise to effectively regulate a powerful and complex emerging technology like AI. Consequently, AI-specific rules may fail to achieve their intended purpose, or worse, lead to unintended outcomes that stifle innovation or restrict political speech.

The FEC’s approach is especially relevant given the recent nationwide trend of federal and state governments acting to mitigate the potential harms of AI-generated election misinformation. Nineteen states now impose restrictions on deceptive AI-generated election communications, on top of pending legislation in Congress that would prohibit, or require labeling of, certain AI-generated communications in federal elections. Legislative interest in the topic has accelerated significantly in the lead-up to the election, with 14 states approving restrictions in 2024 alone.

As the pressure on government officials to “do something” about AI-generated deepfakes and misinformation continues to build, they should look to the FEC’s technology-neutral approach, which regulates harmful conduct and activities rather than tools and technologies. Not only will this help manage challenges without sacrificing potential benefits, it will also provide a flexible framework for effectively adapting to the next round of disruptive innovation.