Forget Politics. For Now, Deepfakes Are for Bullies

The surging popularity of Chinese app Zao has reignited concern that deepfakes could influence an election. Researchers say that's not likely.

While Americans celebrated a long Labor Day weekend, millions of people in China enrolled in a giant experiment in the future of fake video. An app called Zao that can swap a person’s face into movie and TV clips, including from Game of Thrones, went viral on Apple’s Chinese app store. The app is popular because making and sharing such clips is fun, but some Western observers’ thoughts turned to something more sinister.

Zao’s viral moment was quickly connected with the idea that US politicians are vulnerable to deepfakes, video or audio fabricated using artificial intelligence to show a person doing or saying something they did not do or say. That threat has been promoted by US lawmakers themselves, including at a recent House Intelligence Committee hearing on deepfakes. The technology is listed first among eight disinformation threats to the 2020 campaign in a report NYU published Tuesday.

Yet some people tracking the impacts of deepfakes say it’s not big-name US politicians who have the most to fear. Rather than changing the fate of nations by felling national politicians, they say, the technology is more likely to become a small-scale weapon used to extend online harassment and bullying.

One reason: US public figures like presidential candidates take—and deflect—a lot of public flak already. They’re subject to constant scrutiny from political rivals and media organizations, and they have well-established means to get out their own messages.

“These videos are not going to cause a total meltdown,” says Henry Ajder, who tracks deepfakes in the wild at Deeptrace, a startup working on technology to detect such clips. “People like this have significant means of providing provenance on images and video.”

The term deepfake comes from a Reddit account that in 2017 posted pornographic clips with the faces of Hollywood actresses swapped in, and later released the machine learning code used to make them. Widely circulated iterations of that software and continuing progress on image manipulation from artificial intelligence labs have made deepfake technology steadily better and more accessible.
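
For readers curious about the mechanics: the approach popularized by that code trains a single shared encoder alongside one decoder per person, so that feeding person A's face through person B's decoder produces the swap. What follows is a minimal, illustrative sketch of that shared-encoder idea in PyTorch, with placeholder network sizes and random tensors standing in for real face crops; it is not the original code, and real systems add face alignment, larger networks, and more sophisticated losses.

import torch
import torch.nn as nn

# Shared encoder: compresses any 64x64 face into a small latent code.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

# One decoder per identity: reconstructs a face from the latent code.
class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training: each decoder learns to reconstruct only its own person's faces,
# while the encoder is shared between both.
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()
faces_a = torch.rand(8, 3, 64, 64)  # stand-ins for real aligned face crops
faces_b = torch.rand(8, 3, 64, 64)
for step in range(100):
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The swap: encode person A's face, decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))

Because both identities pass through the same encoder, the latent code ends up representing pose and expression generically, which is what lets the other person's decoder repaint those same cues with a different face.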

Attention-grabbing fake clips of Barack Obama and Mark Zuckerberg, made to demonstrate the technology’s potential, have gained millions of views and have fed ideas about the technology’s election-swaying potential. Researchers and companies such as Deeptrace have ramped up research into technology to spot deepfakes, but the notion of a reliable deepfake detector is still unproven.

Ajder says there’s a “good chance” deepfakes involving 2020 candidates will appear. But he expects them to be an extension of the memes and trolling that originate in the danker corners of candidates’ online fan bases, not something that jolts the race to the White House onto a new trajectory.

Sam Gregory, who is tracking the potential impacts of deepfakes at nonprofit Witness, which promotes use of video to protect human rights, says one reason politicians figure prominently in predictions of a faker future is that politicians themselves have encouraged it. “I imagine it feels very personal,” he says.

US Senator Ben Sasse (R–Nebraska) last year proclaimed deepfakes “likely to send American politics into a tailspin” and introduced a bill that would make it a crime to create or distribute deepfakes with malicious intent. US Representative Adam Schiff (D–California) recently called deepfakes a nightmare scenario for the 2020 campaign.

Gregory believes community activists and journalists in places like South America, the Middle East, and Southeast Asia have more to fear from deepfakes. He helped organize a meeting on deepfakes in Brazil this July that included researchers, journalists, activists, and civil society groups.

The group was more concerned about deepfakes amplifying local harassment than altering national politics. Journalists and activists working on human rights issues such as police brutality and gay rights already face disinformation campaigns and harassment on platforms like WhatsApp, sometimes using sexual imagery, Gregory says.

What little is known about deepfakes in the wild so far supports the idea that this kind of harassment will be the technology's first major negative impact.

Ajder of Deeptrace is aware of a handful of cases around the world in which a video at the heart of a political scandal was alleged to be a deepfake, but none have been confirmed. The startup’s attempts to track deepfakes circulating online show that pornographic deepfakes are many times more common. They have already become a tool of targeted harassment, similar to revenge porn.

Paul Barrett, author of the NYU report listing deepfakes as a top threat for 2020, argues that uncertainty about deepfakes’ impact doesn’t remove the need to prepare for them in national politics. A well-turned fake clip released in the last 24 hours of a close election, giving little time for a response, could be decisive, he says. “Given the experience in 2016 with Russia, given the volume of domestic disinformation, given the behavior on Twitter and elsewhere of the Republican candidate in 2020, I recommend preparing,” Barrett says.

Gregory of Witness cautions that such calls show how hype about the threat that deepfakes pose to national politics could have serious unintended consequences. If platforms like Facebook and YouTube feel pressured or obliged to swiftly remove alleged deepfakes, their defenses could themselves become a tool to manipulate reality. Politicians or their supporters could use the platforms’ reporting tools to suppress viewpoints they dislike, he says. “The solution could be more damaging to public trust,” he says.

