OPINION

We made a realistic deepfake, and here’s why we’re worried

Our video demonstrated the potent combination of synthetic media’s capacity to fool the eyes and social media’s capacity to reach eyeballs.

A still from a fake video featuring former President Barack Obama shows elements of the facial mapping used in new technology that lets anyone make videos of real people appearing to say things they’ve never said. There is rising concern that US adversaries will use such technology to make authentic-looking videos to influence political campaigns or jeopardize national security. (AP)

In July, we released a deepfake of Richard Nixon giving an Oval Office speech informing the public that the Apollo 11 mission had ended in tragedy. Our aim was to help people understand deepfakes — the use of artificial intelligence to make fake videos or recordings that seem real. But imagine if it had been intended as misinformation instead.

“In Event of Moon Disaster,” produced by the MIT Center for Advanced Virtuality, combines edited archival NASA footage with an artificial intelligence-generated synthetic video of a Nixon speech, along with materials to demystify deepfakes.

After our video was released, it reached nearly a million people within weeks. It was circulated on social platforms and community sites, demonstrating the potent combination of synthetic media’s capacity to fool the eyes and social media’s capacity to reach eyeballs. The numbers bear this out: In an online quiz on our site, 49 percent of visitors believed Nixon’s synthetically altered face was real, and 65 percent thought his voice was real.

When deepfakes came under the spotlight last year, some media outlets ran sensational headlines signaling the “end of news” and the “collapse of reality.” But how worried should we be?

The manipulation of media, both creative and nefarious, is not new. Society has long produced media with the capacity to cause harm. Consider Julian Dibbell’s 1993 Village Voice article “A Rape in Cyberspace,” reporting on the traumatic, then-new experience of a woman’s avatar being abused in an online multiplayer game.

The technology was different, but the harsh impacts on human behavior, and the anxiety around the blurring of virtual and physical worlds, were similar to what we are experiencing with deepfakes now.

It’s not the technology alone; it’s also how we share it, watch it, regulate it, and how we believe it when we want to believe it — even if we know it’s fake.

There were over 49,000 deepfake videos in circulation as of last June, with over 134 million views; 96 percent of those videos were deemed “non-consequential” deepfakes, such as fake pornography. However, even in this realm there are applications such as DeepNude, which removes women’s clothing from images without their consent, just as in Dibbell’s example of virtual abuse. Victims of fake pornography, like Noelle Martin, whose pictures were stolen and used in explicit videos, often sustain trauma and, despite their best efforts, cannot get the offending fakes removed.

The current state of the deepfake media ecology.

So far, political deepfakes are mostly satirical and understood as fake. But nefarious actors can also create videos making puppets of their enemies and amplify them on social media with damaging impacts that could even influence elections.

Just last month, The New York Times described how Congressman Steve Scalise posted an insidious video showing Joe Biden taking a false position — yet this was not a high-tech deepfake; it used more rudimentary techniques.

Indeed, simpler forms of manipulation are also a threat. “Cheapfakes” is a term Joan Donovan and Britt Paris coined to describe simple video editing techniques of speeding up, slowing down, cutting, and recontextualizing existing footage to deceive. For example, a video of House Speaker Nancy Pelosi slowed down to make her appear drunk is a cheapfake.

While this category of misinformation is the bigger problem now, that will change as deepfakes become easier to create, according to Sam Gregory of Witness.

Why the deepfake media ecology matters.

Video can be a powerful and positive force, useful in holding people accountable, said David Rand, associate professor of management science and brain and cognitive sciences at MIT. “One concern is that as deepfakes get more and more common, they will erode that power of video to hold bad people accountable for bad things, because that video can be written off as fake.”

In the wake of Black Lives Matter protests, Missouri Republican congressional candidate Winnie Heartstrong tweeted that George Floyd’s death was a hoax, with the images “created using deepfake technology — digital composites of two or more real persons.”

Imagine a zero-trust society, where anything can be dismissed as a forgery and everything can be plausibly denied. The worst case is a dystopia.

It all comes down to us and our willingness to confront our own confirmation biases.

“We’re the bug in the code,” Danielle Citron, professor of law at Boston University, told Scientific American. “We have all these studies . . . about how even if you say something is a lie, if it confirms your own beliefs, we still believe the lie.”

When CNN reporter Donie O’Sullivan showed Trump supporters how some Biden videos were faked, one of them casually shot back, “You call it a fake video. What it is, is an Internet meme.”

What we can do.

Combating misinformation in the media requires a shared commitment to human rights and dignity — a precondition for addressing many social ills, malevolent deepfakes included. Along with this commitment, there are several ways in which we can guard against misinformation, both cheap and deep: counter technologies, regulations, and public awareness.

Counter technology is important and reassuring, but it isn’t a magic pill. Digital forensics researchers like Siwei Lyu at the State University of New York at Buffalo have been developing algorithms that can spot the digital traces deepfakes leave behind. Even Lyu admits that these aren’t bulletproof. And as each detection algorithm is developed, deepfake creators will develop more sophisticated techniques to circumvent it.

Regulations can also play a role. In Texas and California, it is illegal to create a deepfake with the intent of injuring a political candidate or to influence an election. That’s a start.

This is where the public’s media literacy comes into the picture.

“If you have highly critical media consumption habits, you’d probably be more resilient,” said Wilson Center disinformation fellow Nina Jankowicz.

When we consume media, it helps to evaluate its source, cross-reference it, or look for factual errors. This is just the kind of demystification of deepfake technology that “In Event of Moon Disaster” aims for.

The bottom line.

It’s worth remembering that deepfakes are just one tool in a long line of media innovations, from Granville Woods’s telephone to Photoshop to today’s synthetic media. Media technologies can help keep us connected. They can be used for activism. They can be used to make educational art.

As society fractures, media forms turn into ammunition, with misinformation used to stoke oppression, division, and violence. It is up to each of us to reckon with the limitations of our particular perspectives and the failures of our society to support justice for all, which must be a unifying cause.

It’s not the technology we’re worried about. It’s us.

Francesca Panetta is an artist and experimental journalist, and Pakinam Amer is a science and multimedia journalist. D. Fox Harrell is a professor of digital media and AI at MIT and is director of the MIT Center for Advanced Virtuality, where Panetta is XR creative producer and Amer is a research affiliate. Panetta codirected “In Event of Moon Disaster” with Halsey Burgund.