In late September, President Donald Trump posted a racist AI-generated video depicting House Minority Leader Hakeem Jeffries standing before a podium, wearing a sombrero and sporting a mustache, while Senate Minority Leader Chuck Schumer says insulting things about Democrats.
In mid-October, the government of Ontario aired an anti-tariff ad in the U.S. featuring a clip of Ronald Reagan hammering home the futility of imposing tariffs on foreign goods. Trump charged, erroneously, that the video was an AI deepfake and that Reagan in fact supported tariffs.
While these two incidents—the first an act of AI disinformation, the second the labeling of a genuine video as such—may seem unrelated, they are very much linked. This is more than Trump lying and assuming that others lie too. His dissemination of deepfakes and his accusations of deepfakery work together as parts of the same disinformation strategy.
The first part of the strategy is the distribution of high volumes of lies and half-truths via campaign speeches, social media, ads, or TV appearances. The second part is the continual labeling of actual news stories from legitimate outlets as “fake news.” Recall what Trump adviser Steve Bannon told the writer Michael Lewis in 2018: “The real opposition is the media. . . . And the way to deal with them is to flood the zone with shit.”
AI-generated deepfakes represent a dangerous technology upgrade to that same disinformation playbook. In the Schumer videos, Trump’s circle spread the narrative that power-hungry Democrats want to provide healthcare benefits to illegal immigrants. In the case of the Ontario tiff, Trump labeled an authentic video an AI fake. As Trump and his allies create more of their own deepfakes, further sullying the information space, people become more likely to believe that real videos are fake, too.
“[A] skeptical public will be primed to doubt the authenticity of real audio and video evidence,” legal experts Danielle Keats Citron and Robert Chesney wrote in a 2019 law review article. “This skepticism can be invoked just as well against authentic as against adulterated content,” a problem Citron and Chesney dubbed the “liar’s dividend.” The problem may intensify as artificial intelligence models improve and generate video indistinguishable from footage shot with a real camera.
As the line between truth and lies disappears, news consumers seeking objective truth eventually grow fatigued. For those who want to create an environment where disinformation thrives, that is a very good result—and not at all a new idea. “The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction no longer exists,” Hannah Arendt wrote in The Origins of Totalitarianism, her 1951 book chronicling the rise of Nazi Germany and the Soviet Union under Joseph Stalin.
So far, much of the AI-generated video produced or shared by Trump and his allies—like the recent one of the president in a fighter jet dropping excrement on thousands of “No Kings” demonstrators—has quite obviously been fake. Much of it reprises classic “own the libs” memes. But the administration has been edging toward deepfakes designed to deceive.
In mid-October, Trump loyalists at the National Republican Senatorial Committee, hoping to blame Democrats for the ongoing shutdown, produced a deepfake video showing Schumer saying “every day it gets better for us”—words taken out of context from a print interview Schumer gave to Punchbowl News. The implication: that Democrats care only about scoring political points during the standoff, not about the damage it has done—and will continue to inflict—on ordinary Americans.
All that separates that video from a pure deepfake is that the AI-generated senator is shown vocalizing words Schumer actually said. At the end of the video, Schumer smiles broadly, suggesting that he is cynically enjoying the shutdown. That smile is all AI.
While some states—including California, Minnesota, Texas, and Washington—have added specific language prohibiting AI deepfakes to their election laws, the Federal Election Commission (FEC) has not followed suit. In 2023, the FEC considered a new regulation specifically targeting deceptive AI-generated content but dropped the idea in favor of relying on existing rules against deceptive campaign media, fearing that a broad ban on AI-generated content might exceed its jurisdiction and that any rulemaking might not withstand legal challenges based on free speech rights.
On the Hill, Minnesota’s Democratic Sen. Amy Klobuchar has explicitly warned about AI-generated deepfakes in elections. “Like any emerging technology, AI has great opportunities but also significant risks . . . and we have to put rules in place,” she said at a hearing in April 2024. That year, Klobuchar and Alaska’s Republican Sen. Lisa Murkowski co-sponsored the AI Transparency in Elections Act, which would have required disclaimers on political ads that use AI-generated or AI-modified imagery or audio. The bill never made it out of committee.
Many AI companies include rules in their terms of service against using their generative models to create synthetic media that imitates real people without their consent. Most also embed some form of visible watermark or hidden data to indicate that an image is AI-generated. However, sources say that it’s not hard to find an open-source model that applies none of these safeguards.
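To make the “hidden data” idea concrete, here is a minimal, hypothetical sketch in Python of how one might check a downloaded image for the kind of provenance metadata (such as C2PA “Content Credentials”) that some companies embed. Everything in it (the marker list, the function name) is illustrative rather than any vendor’s actual tooling, and a missing marker proves nothing, since such metadata is easily stripped.

# Hypothetical, illustrative heuristic: scan an image file's raw bytes for
# markers commonly associated with C2PA provenance metadata. This is not
# real verification, and the absence of markers proves nothing.
import sys

MARKERS = (b"c2pa", b"jumb", b"contentauth")  # heuristic byte signatures

def has_provenance_markers(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in MARKERS)

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        label = "found" if has_provenance_markers(image_path) else "not found"
        print(f"{image_path}: provenance markers {label}")

Real verification goes further, parsing the embedded manifest and checking its cryptographic signature; a crude byte scan like this only hints at whether provenance data is present at all.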
Meanwhile, numerous experts have expressed concern that as AI tools mature and become more accessible, more people (including foreign actors) will have the resources they need to actively spread falsehoods about political issues, causes, candidates, or campaigns. A 2024 Harvard survey of 1,000 U.S. adults found that 83% of respondents worried that AI could be used to spread false election-related information.
In a tight congressional election next year, and especially in the 2028 presidential election, all restraint could go out the window. Think the president wouldn’t go that far? According to The Washington Post, Trump made more than 30,000 false or misleading statements during his first term. He was willing to see the Capitol mobbed and defaced if it meant staying in office.
To Trump, a truth is no better than a lie, no matter the format. They’re both just a means to more power.