AI and the 2024 Election Series Part IV: Policy Considerations for the Future
The 2024 election was the first presidential contest held since artificial intelligence (AI) entered the mainstream. Part I and Part II of this series discussed how the technology triggered a rapid response from lawmakers and regulators over concerns that AI would be weaponized to deceive American voters. Part III recapped how AI was utilized during the recent election and how its actual effects were more benign than originally feared.
But AI continues to evolve rapidly, and policymakers will likely hear renewed calls for government action to protect elections from AI-generated disruptions. Informed by lessons learned from the 2024 election, the final installment of this blog series will outline a policy framework for addressing AI impacts that prioritizes free speech in the election information environment, robust election cybersecurity, and responsible uses of AI in election administration.
Free Speech in the Election Information Environment
In the lead-up to the 2024 election, federal and state officials sought to protect the public from harmful AI-generated election information by banning the use of AI in certain political communications or requiring that such content be labeled. Both approaches serve as cautionary tales for lawmakers considering further regulation in this space, though for distinctly different reasons.
While prohibition is the least common approach among states that regulate the use of AI in elections, an October court decision blocking California’s deepfake prohibition law outlined ways in which this approach likely violates the First Amendment. Full resolution of the case remains pending, but the judge’s initial order granting the injunction found that the state could not justify the speech burdens imposed by the law, writing that it “unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”
In particular, the court criticized the law’s overly broad definitions of “materially deceptive content” that would be subject to restrictions and raised concerns that the law makes state government the arbiter of truth in political disputes. The ruling clearly signals that federal and state lawmakers across the nation should abandon efforts to regulate political speech through bans on the use of AI in certain election communications.
Meanwhile, disclosure requirements, the more common approach to regulating the use of AI in elections, appear to stand on firmer legal ground than outright prohibition; none has been successfully challenged in court to date. However, the impact of these requirements is unclear, as there are neither examples of the laws being enforced nor clear evidence that they meaningfully improve transparency or boost public trust. In fact, labeling content as AI-generated can have unintended effects that are counterproductive to building trust in elections. Taken together, this legal ambiguity and unclear policy impact suggest that policymakers should not rush to impose new labeling requirements without better information about the benefits and risks of this approach.
Rather than enacting new restrictions targeting a specific technology, policymakers should consider how existing laws and regulations apply to AI when it is used in already regulated activities. The Federal Election Commission (FEC) relied on this “technology-neutral” approach when it declined to issue new rules in 2024, declaring instead that existing restrictions on fraudulent misrepresentation apply to AI-generated content. Naturally, this principle extends to a number of contexts beyond FEC regulations, such as the use of AI to commit fraud or defamation, and it does not require the enactment of new federal or state laws dealing specifically with AI.
Perhaps the strongest argument in favor of a light-touch approach to AI regulation can be seen in the 30 states where AI remained unregulated. Cultural awareness of AI, in combination with existing laws around misrepresentation, fraud, and defamation, proved sufficient in 2024 and made AI-specific regulation unnecessary. Ultimately, while the government’s attempts to protect voters from AI-generated misinformation may be well intentioned, the public is more than capable of protecting itself.
Robust Election Cybersecurity
Cyberattacks targeting election infrastructure, election offices, and political campaigns have become ubiquitous in the information age. This election cycle saw an uptick in distributed denial-of-service (DDoS) attacks on election office websites as well as high-profile hacking incidents, including a breach of the Trump campaign in which Iranian hackers stole internal campaign documents. These techniques are not new threats, but AI may increase their sophistication and make it easier for more bad actors to deploy them. For example, AI lowers the barrier to entry for launching high-volume DDoS attacks and generating high-quality phishing emails at scale.
As AI continues to enhance the capabilities of cyberattackers, it is essential that America maintain its ability to defend election infrastructure. Currently, the Cybersecurity and Infrastructure Security Agency (CISA) is responsible for providing this protection, with a particular focus on defending against foreign threats. However, as the Trump administration and Republican-controlled Congress seek major reductions in the size and scope of the federal government, there has been discussion of eliminating or restructuring CISA in some fashion. Regardless of how that debate unfolds, policymakers should ensure that the core function of protecting elections from cyberthreats remains intact at the federal level.
The federal government is best positioned to coordinate election cybersecurity because its capabilities and efficiencies cannot be replicated at the state or local level. For example, detecting and responding to cyber threats emanating from foreign countries is a core competency across various federal law enforcement and intelligence agencies, so it makes sense to utilize these capabilities to protect election infrastructure nationwide.
Other services CISA delivers to local election offices could arguably be handed off to other levels of government. For example, state IT agencies and local law enforcement could conceivably step in to conduct cyber and physical security assessments, given additional resources and time to get up to speed. However, Congress should prioritize preserving the sophisticated cybersecurity and threat-monitoring capabilities unique to the federal government to ensure America remains best positioned to defend against cyberattacks targeting election infrastructure.
Trust-Building AI Uses in Election Administration
AI holds great promise for improving efficiency and effectiveness across many aspects of government operations, including election administration. From chatbots to signature verification, many AI use cases can help election offices do their jobs more effectively. However, it is essential to integrate these technologies in ways that align with the overall goal of improving public trust in elections.
During the 2024 election, officials took steps to establish themselves as trusted sources of election information in an effort to counter the potential effects of AI-generated misinformation about the election process. This focus on restoring trust through proactive communications and transparency about the process was wise, considering the low level of confidence in elections—particularly among Republicans—after the 2020 presidential contest.
National confidence in the election process rebounded in 2024, due in large part to Trump’s victory and renewed faith among his supporters. Election officials now have an opportunity to solidify this faith through continued proactive communication, transparency, and responsible adoption of technology that can build even more confidence over time.
For example, election officials recognize that avoiding small errors on ballots is imperative for demonstrating competence and building trust. That is why ballot proofing is an important (though labor-intensive) step in the process of administering elections. AI can help improve the speed and accuracy of these reviews, providing a powerful assist to the human who will ultimately decide when the ballot is ready for voters.
However, incorporating too much AI too quickly could harm trust in elections. A recent poll of Utah voters found that using AI to help verify signatures would result in a net reduction in election confidence. This suggests that election officials must be methodical in how they incorporate technology and remain mindful of the impacts these tools have on election trust.
Federal and state officials can both play a role in helping election officials strike the optimal balance between technology adoption and increased public trust in elections. At the federal level, the U.S. Election Assistance Commission is well positioned to build on its existing AI resources by cataloguing best practices and providing voluntary guidelines for the effective use of AI in election administration. Similarly, state lawmakers can support and fund pilot programs that experiment with different uses of AI in order to encourage innovation while protecting against harmful impacts on election trust.
Conclusion
The 2024 election served as an initial stress test of how well America’s electoral systems were prepared for the arrival of AI, and the results were positive overall. With knowledge gained from that experience, policymakers are positioned to pursue responses that address actual risks, such as cyberthreats, and harness opportunities to responsibly improve government operations through technology, all while respecting Americans’ fundamental right to free expression.