Generative AI and Electoral Communications
There is growing consensus that generative AI (GAI) will affect upcoming elections worldwide. There is less consensus, however, as to what the actual effect of GAI on elections will be. Predictions range from catastrophic “October surprise” deepfakes to less tangible, but equally concerning, warnings about a degraded information ecosystem in which discerning the truth is extremely difficult. Without a common perspective on these risks, there is no agreement on how to regulate the use of GAI by candidates and campaigns to communicate important election-related information.
Although much has been written about deepfakes, disinformation, and the law, most GAI use by campaigns will likely involve leveraging its capabilities to produce otherwise legitimate electoral communications at scale. These scaled communications can take the form of a chatbot designed to answer questions about a candidate, mass microtargeting of voters with ads tailored to their personalities and able to respond to their feedback, or AI phone bank “volunteers.” There are clear risks to the use of GAI for these communications, including the type of lower-probability but high-impact effects—which this paper will refer to as “tail risks”—that some warn can threaten democracy itself, especially if malicious actors use GAI tools to mimic legitimate communications. But there are also benefits, such as helping under-resourced candidates level the playing field. As lawmakers and regulators contemplate rules to shape how GAI tools are built and deployed, they can attempt to craft measures that address the harms of GAI for electoral communications without eliminating the positives.
This paper will survey and analyze the benefits and harms of GAI for electoral communications, along with current regulatory and legislative efforts to address its risks. The paper will proceed in three parts. Part II will describe the current state of GAI use for electoral communications and examine its potential positive and negative effects in the near, medium, and long term. Part III will look at current legislative and regulatory approaches in the United States and Europe—including self-regulation by AI companies—that either address GAI for electoral communications or can serve as a basis for crafting similar laws and regulations. Separating these approaches into those that regulate the user and those that regulate the system, it will evaluate how well they address the negative effects of GAI while preserving its benefits. Finally, Part IV will provide recommendations for lawmakers and regulators who wish to enact measures that prevent or mitigate harm from GAI-enabled electoral communications without eliminating their benefits.
Evan Chiacchiaro
Georgetown Law J.D. 2025; Georgetown University M.A. 2017; Tufts University B.A. 2011.