It will surprise no one that AI-generated material is now being used in political campaigns, and that the uses are not always benign. But how exactly is it being used, and what regulations might be necessary?
In recent months, voters in a number of states have reportedly received robocalls with AI-generated voices impersonating public officials. One call that went out to New Hampshire voters in January, for instance, featured a voice resembling President Joe Biden’s that urged them not to vote in the primary, claiming that “[v]oting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”
While the technology can obviously be used unscrupulously, some of the major generative AI platforms will refuse requests to impersonate public figures, or even to generate images depicting certain likenesses.
The image at the top of this article, for instance, was used in part because ChatGPT refused a prompt to create an image depicting the hypothetical scene of President Biden’s fake robocall. It instead suggested creating “an image of a man resembling a politician on the phone, appearing to persuade others with a serious expression, in an office setting with American flags in the background”—which was the basis for the WordPress-generated image above.
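For the technically curious, this refusal behavior is easy to observe firsthand: ask an image-generation service to depict a real public figure, and the request often comes back as a content-policy error rather than an image. Below is a minimal sketch using the OpenAI Python SDK; the prompt and the error handling are illustrative assumptions, not a record of the exchange described above, and the exact refusal behavior varies by platform and policy.

```python
# A minimal, hypothetical sketch of an image request being refused on
# content-policy grounds. Assumes the OpenAI Python SDK ("pip install openai")
# and an API key in the OPENAI_API_KEY environment variable.
import openai

client = openai.OpenAI()

try:
    result = client.images.generate(
        model="dall-e-3",
        prompt="A photorealistic image of President Biden recording a robocall",
        size="1024x1024",
    )
    print(result.data[0].url)
except openai.BadRequestError as exc:
    # Prompts naming real public figures are commonly rejected with a
    # content-policy error instead of returning an image.
    print(f"Request refused: {exc}")
```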
Recent research in political communication indicates that generative AI is already widely incorporated into political campaigns in ways that are often less sensational but still consequential.
For instance, a team of researchers from the University of Texas at Austin’s Center for Media Engagement interviewed a number of political campaign operatives about their perceptions and uses of generative AI. Their responses reflect a more thoughtful, scrupulous consideration of these tools’ impact on political discourse than the robocall episodes might suggest.
As the researchers summarized, many of these operatives expressed “ardent hope” that the technology would simply be used to “democratize the campaign space” and “allow for increased engagement with marginalized and young voters.” It was already being used, they noted, to “greatly accelerate” the speed with which campaigns could conduct data analysis and “A/B test content and unearth the approaches and language most compelling to target audiences.”
Such affordances would theoretically allow more grassroots campaigns to scale up their operations with limited resources.
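To make the A/B-testing affordance concrete, here is a minimal sketch of the kind of comparison such tooling automates. Everything in it is hypothetical: the message variants, the response counts, and the simple two-proportion z-test all stand in for whatever proprietary analytics a real campaign platform runs.

```python
# Hypothetical A/B test of two campaign message variants. All numbers are
# invented for illustration.
from math import sqrt

from scipy.stats import norm

# Recipients shown each variant, and how many clicked through to donate.
variant_a = {"shown": 5000, "responded": 410}  # "Chip in $5 before the deadline"
variant_b = {"shown": 5000, "responded": 465}  # "Your neighbors are counting on you"

rate_a = variant_a["responded"] / variant_a["shown"]
rate_b = variant_b["responded"] / variant_b["shown"]

# Pooled two-proportion z-test: is variant B's response rate reliably higher?
pooled = (variant_a["responded"] + variant_b["responded"]) / (
    variant_a["shown"] + variant_b["shown"]
)
std_err = sqrt(pooled * (1 - pooled) * (1 / variant_a["shown"] + 1 / variant_b["shown"]))
z_score = (rate_b - rate_a) / std_err
p_value = norm.sf(z_score)  # one-sided: does B outperform A?

print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z_score:.2f}  p = {p_value:.3f}")
```

The point is less the statistics than the speed: where this kind of comparison once required a polling firm and weeks of turnaround, generative tools can draft the variants and analyze the results in an afternoon, which is the kind of acceleration the interviewed operatives described.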
Some of the campaign professionals interviewed in the University of Texas study expressed concern that certain political actors were not abiding by ethical norms that discourage deceptive uses of generative AI. In response, they called for greater regulation going forward, especially requirements for disclosure and transparency around machine-generated content.
According to a report in Government Technology, a publication that tracks technology regulation, Pennsylvania legislators have taken steps toward such regulation. A bipartisan coalition in both chambers of the state legislature introduced legislation in June 2024 to “impose civil penalties on creators of AI-manipulated campaign videos, images, texts and sounds.” This legislation was in part catalyzed by Pennsylvania voters’ own receipt of deceptive robocalls synthetically mimicking the voices of both local and federal elected officials.
Similar legislation previously introduced in Florida and Mississippi imposes criminal as well as civil penalties, though as Government Technology notes, experts disagree about which penalties most effectively deter violations.
Students in communication studies will have ample opportunity to track and debate developments in this space through courses in political communication, rhetoric, persuasion, and media law. They can also help produce crucial knowledge about the uses of AI-generated messages and their consequences. Building on the University of Texas study, for instance, much empirical research remains to be done on the kinds of messages that incorporate generative AI and on practitioners’ perspectives about what is being done and what should be done.
Whether or not students go on to work in elections and politics, moreover, thoughtfully considering the persuasive applications of AI technology is important for anyone whose work involves crafting messages with particular audiences in mind — a description that applies to many communication studies graduates.