AI and the 2024 Election Part II: Push for Regulation Falls Short in Washington, D.C.
Artificial intelligence (AI) was widely viewed as a threat to America’s political process in the run-up to the 2024 election because of the technology’s ability to generate highly realistic deepfakes that could be used to deceive voters and undermine trust in democracy. Part I of this series covered the state policy response to these concerns, including the significant increase in laws regulating the use of AI in election communications.
Part II will review how the same policy debate played out in Washington, D.C. as Congress and various federal regulatory agencies took up the issue in earnest. The details of the proposed laws and regulations varied, but all tracked closely with at least one of the two general approaches taken in the states: disclosure requirements and prohibitions.
While efforts to advance federal restrictions on the use of AI in elections did not lead to meaningful changes in policy, federal agencies like the Federal Election Commission (FEC) and the Federal Communications Commission (FCC) identified how existing regulatory authorities can be applied to AI. Meanwhile, the Election Assistance Commission (EAC) and the Cybersecurity and Infrastructure Security Agency (CISA) helped local officials plan for and adapt to the impacts of AI in election administration.
Bills to Enact Prohibition and Disclosure Requirements Stall in Congress
Members of the 118th Congress introduced a flurry of AI bills, including more than a dozen dealing specifically with the use of AI in federal elections. Most failed to gain traction, but the Senate Rules Committee advanced two bills sponsored by Sen. Amy Klobuchar (D-Minn.): one establishing a prohibition and another requiring disclosure.
S.2770 took the prohibition approach, targeting election deepfakes by banning the use of AI to generate “materially deceptive” election communications and enforcing the restriction through injunctions and monetary damages. Meanwhile, S.3875 required disclosure for AI-generated content used in political advertisements and authorized the FEC to issue fines for violations. While both bills were similar in substance to the state laws, congressional authority over most election policy is limited to federal elections, meaning these restrictions would apply only to campaigns for president or the U.S. Congress.
In May 2024, both bills were approved by the Senate Rules Committee on a party-line vote, with Democrats in support and Republicans opposed due in part to concerns over the bills’ impact on free speech. Neither has advanced to the floor since, though both technically remain alive until the 118th Congress concludes in the coming weeks.
Amid Pressure to Act on AI, FEC Opts for Technology-Neutral Approach to Regulation
The FEC is an independent federal agency that administers and enforces federal campaign finance law. In May 2023, the agency began considering whether to extend its existing prohibition on fraudulent misrepresentation in campaigns to AI-generated content.
Following more than a year of consideration and more than 2,000 public comments, the FEC declined to issue new rules. Instead, it approved an interpretive rule explaining its view that regulations targeting a specific technology—in this case, AI—are unnecessary because the underlying federal statute prohibiting fraudulent misrepresentation is “technology neutral.” In other words, the fraudulent activity itself is the relevant factor, regardless of the technology used.
FCC Jumps into the AI Fray Early, Seeks Further Expansion of Regulatory Scope
The FCC is an independent federal agency responsible for regulating interstate and international communications by radio, television, wire, satellite, and cable. Traditionally, the FCC’s role in election policy has been limited to certain matters related to campaign advertising; this year, it also engaged on issues related to AI and elections.
The first incident involving the FCC was also one of the earliest examples of a high-profile “deepfake” impacting the 2024 presidential campaign. In January, some Democratic primary voters in New Hampshire received a robocall featuring an AI-generated voice mimicking President Joe Biden that urged them to sit out the primary and instead save their vote for the general election. The deception was quickly identified by the media, mitigating the damage. After investigating, the FCC confirmed that the use of AI-generated voices in robocalls was prohibited under existing federal law and fined the creator of the robocall $6 million. In August, the FCC also initiated a rulemaking to further strengthen the regulations governing the use of AI-generated robocalls and robotexts.
In addition, the FCC announced in July that it was beginning the process of enacting a disclosure requirement for AI used in campaign advertisements appearing on FCC-regulated television and radio stations. The announcement prompted the FEC chair to write a letter to his FCC counterpart outlining concerns that the proposed rule extends beyond the FCC’s jurisdiction. Both rulemakings, covering AI-generated robocalls and robotexts as well as the use of AI in campaign advertising, remain pending.
Elsewhere in Washington, the EAC and CISA Help Election Officials Adapt to AI
The EAC and CISA are two additional federal agencies that contributed to the federal government’s response to AI in elections, though neither created new rules or regulations. Instead, they provided resources and guidance to local election officials to help them adapt to a new reality in which AI can affect election administration.
In February, the EAC—an independent federal agency tasked with supporting election officials and helping Americans participate in the voting process—issued a decision allowing election offices to use an existing stream of federal funding to counter AI-generated misinformation about the election process. Authorized by the Help America Vote Act of 2002, these security grants have traditionally been used to replace voting equipment, implement audit systems, improve cybersecurity, conduct cybersecurity training, and—more generally—enhance federal election security. By updating its policy guidance, the EAC gave officials on the ground flexibility to experiment with different approaches to countering election-related misinformation through public education, which over time can yield best practices that are shared nationwide.
Along these same lines, Congress considered legislation that would have tasked the EAC with developing voluntary guidelines for using and preparing for AI in election administration, including responding to AI-generated misinformation. The bill, S.3897, would also have required the EAC to produce an after-action report on how AI actually affected the 2024 elections.
The legislation advanced through the Rules Committee in May alongside the prohibition and disclosure bills discussed above and was the only one of the three to win Republican support. However, S.3897 has met the same fate as the other two bills, receiving no consideration on the Senate floor. Nevertheless, voluntary guidelines offer a useful approach because they deliver support to local officials while leaving room for bottom-up innovation.
Finally, through CISA, the federal government plays an ongoing role in protecting the security of American infrastructure, including election infrastructure, from both traditional and AI-enhanced cyber threats. Throughout 2024, CISA provided guidance to local election offices on AI security best practices, offered cybersecurity services such as vulnerability scanning and in-person assessments, and monitored attempts by foreign governments to disrupt the 2024 election using AI as well as traditional methods.
Conclusion
Overall, there was a high degree of interest across the federal government in taking action on the use of AI in elections. That interest did not translate into significant policy change, though agencies did offer guidance on how existing regulations apply in an AI context. This limited federal response contrasted with the relatively high level of policy change in the states discussed in Part I. Part III of this series will explore why AI had a smaller impact on the election than originally feared, including the effects of these differing state and federal policy choices.