Is That Real? Deepfakes Could Pose Danger to Free Elections

August 24, 2023
A faceless figure stands against a backdrop of political propaganda.

(Photo illustration by Emily Faith Morgan, University Communications)

University of Virginia political experts are raising alarms that political operatives could create attack ads using computer-generated “deepfake” videos and recordings, then deploy them at the last minute to turn the tide of an election.

The possibility is real enough that the Federal Election Commission last week opened public comment on a proposal to include deepfakes in its prohibition of fraudulent misrepresentation in campaign advertising.

A deepfake is a computer-generated manipulation of a person’s voice or likeness that uses machine learning to create content that appears real. Deepfakes are used for everything from advertisements and entertainment to hoaxes and harassment.

“Political consultants, campaigns, candidates and even members of the general public are forging ahead in using the technology without fully understanding how it works or, more importantly, all of the potential harms it can cause,” said Carah Ong Whaley, academic program officer for the UVA Center for Politics.

“I am especially concerned about the use of AI for voter manipulation – not just through deepfakes, but through the ability of generative AI to be microtargeting on steroids through text message and email campaigns,” she said.

The race for the Republican Party presidential nomination has already seen a bit of deepfakery. In June, Florida Gov. Ron DeSantis’ presidential campaign “War Room” released a video that included realistic deepfaked photos of frontrunner and former President Donald Trump hugging and even kissing the nose of Dr. Anthony S. Fauci, former director of the National Institute of Allergy and Infectious Diseases.

DeSantis’ campaign acknowledged the photographs were fakes, noting that Trump’s campaign had itself used AI-generated photos of the Florida governor riding a rhinoceros and had posted doctored video of GOP presidential hopeful Chris Christie giving a speech while holding a plate of doughnuts.

Both of the Trump campaign posts were obvious fakes.

In a July advertisement placed by a powerful political action committee supporting DeSantis ahead of Iowa’s Republican presidential caucuses, a voice that sounded like Trump’s criticized Iowa’s governor.

“I opened up the governor position for Kim Reynolds,” the Trumpian utterance states. “And when she fell behind, I endorsed her. Did big rallies and she won. Now she wants to remain neutral. I don’t invite her to events.”

The words were Trump’s, taken from one of his social media posts, but the voice was a synthetic, computer-generated imitation.

“This capability makes it possible to create audio and video of real people saying and doing things they never said or did,” wrote UVA cyber privacy expert Danielle Citron in the foreword to a 2019 paper she co-authored with Robert Chesney of the University of Texas Law School. Citron is the Jefferson Scholars Foundation Schenck Distinguished Professor in Law and the Caddell and Chapman Professor of Law at the UVA School of Law.

“The potential to sway the outcome of an election is real, particularly if the attacker is able to time the distribution such that there will be enough window for the fake to circulate but not enough window for the victim to debunk it effectively (assuming it can be debunked at all),” she wrote.

Law professor Danielle Citron warned that a well-timed deepfake could turn an election. (UVA Law photo)

The Trump deepfakes weren’t the only ones to circulate widely this year. In March, when a series of bank failures shook the economy, an audio clip of what sounded like President Joe Biden giving a dire assessment of the state of the banking system spread widely across the internet. It was not real.

An April report by the Congressional Research Service, a public policy research arm of the U.S. Congress, cited concerns that deepfakes could also be used by other countries to meddle in U.S. elections.

“State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately,” the service reported. “Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election.”

The service also noted the “liar’s dividend,” a concept proposed by Citron and Chesney in which someone legitimately caught doing wrong uses the existence of deepfakes to deny their actions.

“A skeptical public will be primed to doubt the authenticity of real audio and video evidence. This skepticism can be invoked just as well against authentic as against adulterated content,” the authors wrote.

Making an opponent look bad is a political tradition.

“Doctored photos and video footage, candidates’ comments taken out of context; it has already been used for decades in campaigns,” Whaley said. “What AI does is dramatically increase the scale and proliferation, leaving us numb and, hopefully, questioning everything we see and hear about elections. For some voters, exposure to certain messages might suppress turnout. For others, even worse, it could stoke anger and political violence.”

Citron and Chesney worry that social disputes over which facts are true could make lies more readily accepted by voters.

“The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,” the authors wrote. “Deepfakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.”

There are efforts in Congress and the federal government to regulate the use of artificial intelligence-generated media. On Aug. 16, the Federal Election Commission opened public comment on a proposed rule that would bring AI-generated content under its regulations against fraudulent misrepresentation of campaign authority.

The proposal came about after a July petition from Public Citizen, a nonprofit advocacy organization. Comments on the proposed rule, REG 2023-02, must be submitted to the commission on or before Oct. 16.

Whaley worries that a well-timed deepfake could wreak havoc at election time, perhaps spreading via social media. She noted there are 65 elections in 54 countries slated for 2024.

“With significant changeover in leadership at social media companies, especially X, I question whether the policy and technical teams are in place to fully assess the proliferation of malinformation across platforms,” she said. “This is particularly troubling given that malinformation online and organizing online can spill over into political violence in the real world. Think Charlottesville 2017 or Jan. 6, but much, much worse.”

Media Contact

Bryan McKenzie

Assistant Editor, UVA Today Office of University Communications