
Deepfakes are getting better. Should we be worried?

This image, taken from a fake video featuring former President Barack Obama, shows elements of facial mapping used in new technology that lets anyone make videos of real people appearing to say things they've never said. There is rising concern that US adversaries will use the technology to make authentic-looking videos to influence political campaigns or jeopardize national security. (AP)

Like so much new technology, it started with porn.

A Reddit user calling himself “deepfakes” transposed the heads of celebrities onto the bodies of porn actresses, and claimed to have used an algorithm to do so. And what we now call deepfakes were born.

These doctored videos that use deep learning algorithms (this is the “deep”) to produce synthetic imagery and voice (the “fake”) warp images and videos convincingly, mimicking speech or facial expressions so as to make it appear that someone has said or done something they haven’t. And as they become more widespread and more sophisticated, they’re provoking ever-deeper anxiety, raising questions about whether they can be used to influence political elections, or create a climate of fear and distrust.

Last month, China announced new rules essentially banning the publishing and distribution of deepfakes by making it a criminal offense to create “fake news” using artificial intelligence or virtual reality without explicitly flagging the content as fake. The state of California has preemptively banned the distribution of “malicious” manipulated videos, audio, and pictures that mimic real footage, and which intentionally falsify the words or actions of a political candidate, within 60 days of an election.

And in October, the US Senate passed the Deepfake Report Act, requiring the Department of Homeland Security to conduct an annual study of deepfakes and any AI-powered media that “undermine democracy.”

It’s way too early to know whether such prohibitions can prevent deepfakes from circulating in the first place. In fact, you may have already seen one yourself: There’s the faked video of Barack Obama calling President Trump a “dipshit,” or the deepfake videos of Boris Johnson and his rival, Jeremy Corbyn, in which each candidate appears to endorse the other during Britain’s recent general election. Some analysts worry that Russia or other malicious actors could use deepfakes in a bid to disrupt the 2020 presidential election.

“We’ve already seen things like people manipulating videos of Hillary Clinton and Elizabeth Warren. And those aren’t particularly sophisticated,” says Ethan Zuckerman, director of the Center for Civic Media at the Massachusetts Institute of Technology. “You don’t necessarily need the most cutting-edge technology to be successful at this; what you need is ill intent, a credulous audience, and a way to get something amplified.”

I worry about this, too. There are nearly 15,000 deepfake videos online, according to a report released in October by DeepTrace Labs — roughly double the number from a year earlier. They’ve been implicated in cases of sexual privacy violations, and Joan Donovan, director of the Technology and Social Change Research Project at the Harvard Kennedy School’s Shorenstein Center, warns that the technology may be particularly destructive to populations where persistent discrimination already exists.

That’s why I helped with “In the Event of a Moon Disaster,” an art installation and film produced by the MIT Center for Advanced Virtuality that reimagines the 1969 moon landing. The project is meant to inform the public about deepfakes — and show how easy it is to create them.

As a journalist and an emerging technology researcher, I know I can play a part in raising the alarm about new technology that can perhaps be used to mislead the public. I also worry that women and vulnerable communities may be hit hardest by the digital forgeries — especially as legislation meant to quell negative effects is primarily focused on protecting public figures.

But I didn’t have a full sense of what we should be doing to prepare for deepfakes until I spoke with some of the biggest thinkers on the subject. What they told me was alternately worrisome and hopeful. We’re at a moment when we can literally no longer believe our own eyes — seeing is not necessarily believing. Video is not a substitute for truth, Zuckerman says — at least not anymore.

SAM GREGORY IS the program director of Witness, an advocacy project that centers on the power of video as a tool for transparency. Witness trains activists and civic journalists to use video and technology to expose human rights abuses — and deepfakes may pose a direct threat to the integrity of their work. “We spent the last 18 months very directly focused on how you can prepare for deepfakes rather than panicking,” says Gregory.

He starts off by saying what we shouldn’t do: teach the typical Internet user to spot deepfakes. “I think it puts way too much pressure on ordinary people,” Gregory says.

It may be possible to detect some of the poorly generated deepfakes currently online with the naked eye. There are often telltale signs such as lack of blinking or distortions in light and shadow. But as crafty developers iron out the kinks and as the algorithms get smarter, it will quickly become much harder to spot the evidence.
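
For the curious, here is a minimal sketch of how one such telltale sign might be checked programmatically. It is a hypothetical illustration, not a production detector: it assumes you already have the six standard eye landmarks for each frame (for example, from a face-landmark model) and simply estimates a blink rate from the eye aspect ratio.

```python
# Minimal sketch of the "lack of blinking" heuristic; a hypothetical example,
# not a production detector. Assumes you already have, for each frame, the six
# eye landmarks (p1..p6) produced by some face-landmark detector.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of shape (6, 2) with landmark coordinates around one eye."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def blinks_per_minute(eye_landmarks_per_frame, fps=30.0, ear_threshold=0.2):
    """Count open-to-closed eye transitions (the aspect ratio dipping below a
    threshold) and convert to blinks per minute. An implausibly low rate is
    one weak cue that a clip may be synthetic."""
    ears = np.array([eye_aspect_ratio(np.asarray(eye, dtype=float))
                     for eye in eye_landmarks_per_frame])
    closed = ears < ear_threshold
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(ears) / fps / 60.0
    return blinks / max(minutes, 1e-9)
```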

That leaves the work of detection to high-tech firms like DeepTrace Labs that are developing analytical back-end systems that would detect fake videos. And there is some reason for optimism. Henry Ajder, head of communication and research analysis at DeepTrace, promises a benchmark of confidence “in the high 90s.”

There is some promising work in academia, too. Amit Roy-Chowdhury, a professor of electrical and computer engineering at the University of California, Riverside, and director of the Center for Research in Intelligent Systems, has developed a deep neural network architecture that can recognize altered images and identify forgeries at the pixel level.

His system, according to a paper published this year, can tell the difference between manipulated images and unmanipulated ones by detecting the quality of boundaries around objects — boundaries that get polluted if the image is altered or modified.
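
As a rough illustration of the general idea of flagging manipulated pixels rather than whole images, here is a minimal sketch in PyTorch. It is not Roy-Chowdhury's published architecture; the tiny encoder-decoder, the toy data, and the training loss are assumptions made purely for the example.

```python
# Illustrative sketch of pixel-level forgery localization: a segmentation-style
# network that predicts a per-pixel "manipulated" mask. Not the published system.
import torch
import torch.nn as nn

class ForgeryLocalizer(nn.Module):
    """Tiny encoder-decoder that outputs one logit per pixel; altered regions
    (e.g., polluted object boundaries) are what the mask is trained to flag."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # per-pixel logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    model = ForgeryLocalizer()
    images = torch.randn(4, 3, 128, 128)                    # stand-in RGB batch
    masks = torch.randint(0, 2, (4, 1, 128, 128)).float()   # toy tamper masks
    logits = model(images)
    loss = nn.BCEWithLogitsLoss()(logits, masks)             # per-pixel training loss
    print(logits.shape, loss.item())
```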

While his system works with still images, in theory the same principle — more or less — can be applied to deepfake videos, which consist of thousands of frames and images, to detect tampered-with objects. “Researchers can build on some of the aspects to detect deepfakes,” he says.
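
A hypothetical sketch of that extension: run a per-frame image detector across a video and aggregate the frame scores, for instance by averaging the most suspicious frames. The function names and the aggregation rule here are assumptions, not anything described by the researchers.

```python
# Hypothetical sketch: extending a per-frame image detector to video by
# aggregating frame-level manipulation scores. Names and the top-k rule
# are illustrative assumptions.
import numpy as np

def video_manipulation_score(frames, frame_detector, top_k=10):
    """frame_detector(frame) should return the probability that a single frame
    is manipulated; averaging the k most suspicious frames is one simple way
    to turn frame scores into a video-level score."""
    scores = np.array([frame_detector(frame) for frame in frames])
    top_k = min(top_k, len(scores))
    return float(np.sort(scores)[-top_k:].mean())
```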

But despite solid efforts, most researchers agree that the process to detect deepfakes “in the wild” is a different ballgame. And none of the experimental detection techniques, so far, are available for use by the public.

Even as the counter-measures ramp up, they may have trouble keeping pace.

“Unfortunately, ultimately, as our technology gets better, combating deepfakes will become increasingly difficult,” says Aleksander Madry, an associate professor of computer science at MIT whose research is centered on tackling key algorithmic challenges in today’s computing and developing trustworthy AI. “So currently this is more of a cat and mouse game where one can try to detect them by identifying some artifacts, but then the adversaries can improve their methods to avoid these artifacts.”

“Better approaches may deceive the detection mechanism,” agrees Roy-Chowdhury. The computer scientist says it’s highly unlikely that “we’ll have a system which is able to detect each and every single deepfake. Typically, security systems are defined by the weakest link in the chain.”

Some still fear that the public, the primary consumers of deepfake media, may be caught in a technological tug-of-war between two camps: rogue deepfake developers sharing their code widely, and the researchers and tech companies working to create reliable detection tools that remain, so far, exclusive.

It will take more than technology, in other words, to contain any potential damage from deepfakes.

DISINFORMATION IS A timeless problem. The fight over deepfakes, as the academics Britt Paris and Joan Donovan put it, is “an old struggle for power in a new guise.”

Whether it’s a printed article, a doctored photo, or an expensive deepfake video, it will always fall to eagle-eyed, rigorous journalists and seasoned experts to separate truth from lies, even as the line between the two becomes murkier. But Judith Donath, a fellow at Harvard’s Berkman Klein Center for Internet & Society and author of “The Social Machine,” says journalists can also play a destructive role in the spread of manipulated video.

“Deepfakes are not going to travel fast and far without media amplification,” she says. For deepfakes to be lifted from the landfill of social media content, reach an audience, and gain massive traction, they’ll have to be backed by traditional media, she adds. “Journalists are going to play a role in pointing people’s attention in this direction.”

A single deepfake incident may not lead to a permanent distortion of facts, says Donovan, but “over time, people are going to be skeptical of evidence like photos and videos. With the declining trust in the media, that’s a very toxic combination.”

Donovan says the other important player is the social media platforms: “They’ve been really weak on enforcement related to harassment, incitement of violence, and hate speech.”

The spread of deepfake pornography is a case in point, she says. It has involved identity theft and non-consensual image sharing, including “revenge porn” — and yet, the platforms have done little to curb it.

The veteran researcher says that until platforms improve security and privacy, people might have to rethink how they use the Internet and social networks, and that they should exercise vigilance about sharing access to their personal media. “We have entered in a frame where ‘online’ is not something we go on, it’s with us every day, in our pockets,” says Donovan. “Platform companies have a duty to mitigate certain harms, especially harassment.”

Zuckerman, who also believes that platforms have a role to play, cautions however against turning social media platforms, such as Facebook or Twitter, into arbiters of free speech or censors, and suggests instead urging platforms to be transparent with journalists “about what is potentially misinformation and therefore needs to be debunked and debugged.”

One way to mitigate harm, according to Donath, is for platforms to classify media by source: “If you care about what’s real, you have to care about where it’s coming from,” she says. “Right now they make everything very generic.”

Gregory suggests that social platforms empower users by “giving [them] signals when they detect manipulation, especially if that manipulation is invisible to the naked eye, or not easily detectable to journalists and fact checkers, which would apply to deepfakes.”

BUT EVEN AS we talk about how to handle deepfakes, experts say, we should recognize that less sophisticated deception is already in our midst — and may be just as destructive.

Adam Berinsky, professor of political science at MIT and the director of the Political Experiments Research Lab, conducted behavioral experiments over the summer to see whether people are more affected by deepfakes or by false text. Early results showed there isn’t much of a difference. Visual evidence, no matter how sophisticated, will only convince people to a point. “Human persuasion has its limits,” says Berinsky.

Other experts agree that false information doesn’t always need to be conveyed via slick new technology to be effective, especially in the context of politics and elections. “The photorealistic simulation is not necessarily that which is the most persuasive. It’s often cheap fakes rather than deepfakes that are important,” says Elizabeth Losh, a media theorist and author of “The War on Learning.”

“Cheap fakes” or “shallow fakes” can include videos that are edited out of their context, or old videos falsely presented as new — such as viral pictures showing massive swaths of the Amazon rainforest burning, thought to be from the 2019 fires, but later revealed by The Guardian as pictures taken in 1989. That is the sort of deception that could be employed in the 2020 presidential race.

Losh points to a satirical tweet about Democratic hopeful Pete Buttigieg planning a protest against Chick-fil-A restaurants that was later recirculated with the suggestion that it was real. “It was intended to be funny, satiric, but the [made-up] story ended up having legs,” she says. “Things can have second lives, especially when people recontextualize them.”

Ultimately, whatever the technology and whatever the counter-measures, people may just believe what they want to believe. It’s what psychologists often refer to as confirmation bias, a form of selective reasoning that favors information corroborating one’s own beliefs and ignores what doesn’t, Berinsky says. They may not listen to fact checkers who don’t share their political views.

Says Donath: “There’s no solution for people who have no interest in whether what they’re reading or seeing is true or not.”


Pakinam Amer is an award-winning journalist, a former Knight Science Journalism fellow, and a research affiliate of the Center for Advanced Virtuality at the Massachusetts Institute of Technology. Send comments about this story to ideas@globe.com.