A scientist’s opinion: Interview with Sam Gregory about deepfakes

Rhetoric about the ‘end of truth’ plays into the hands of people who are already saying you can’t believe anything – and that is neither true of most audiovisual material, nor yet true of deepfakes. We should not panic but prepare instead.

Deepfakes, a scientist’s opinion

Interview with Sam Gregory, Program Director of WITNESS.


How does WITNESS approach deepfakes and how does this fit in its broader work?

Sam Gregory: WITNESS has spent the last 18 months focusing on how to prepare better for deepfakes. Our work supports people in their use of video and technology to show the realities of human rights violations and community needs. Most of my colleagues’ work focuses on how to ensure better evidence of war crimes and of human rights violations by the police, as well as on ensuring that the platforms used by millions of people are fit for purpose.
We are also concerned about threats to trust in video. While there is a real worry here – deepfakes, ‘synthetic media’ and other manipulations of video and audio are improving technically every day and becoming easier to produce – we are not in a rush to declare the ‘end of truth’. Instead, we work on how to prepare in a way that centres on the people who’ve been most excluded from discussions about technology’s harms and potential solutions – particularly marginalised people and, more broadly, people outside the US and Western Europe. The best way to mitigate the effects will be to look for multiple partial solutions that complement each other, build on past experience and reflect this global perspective.
Rhetoric about the ‘end of truth’ plays into the hands of people who are already saying you can’t believe anything – and that is neither true of most audiovisual material, nor yet true of deepfakes. We should not panic but prepare instead. We have a window of opportunity to do that, so WITNESS has been working on multiple tracks, including leading the first cross-disciplinary expert convenings in the US, and now in Brazil and southern Africa, to identify what people perceive as real threats. We also engage in advocacy towards platforms and lead work on how media professionals can better prepare. We have also been researching the wide range of potential remedies, including proposed technical solutions and how to ensure detection is fit for many different key stakeholders globally. We are looking at solutions that track authenticity and provenance while recognising critical trade-offs around inclusion, privacy and free expression.


How do you see deepfakes affecting society, and what kind of outcomes are you trying to avoid through your work at WITNESS?

Sam Gregory: Deepfakes and synthetic media can exacerbate existing problems around trust in information and trust in media institutions, and can both expand existing threats to the vulnerable and introduce new ones. They also introduce new excuses, such as ‘it’s a deepfake’, that complicate the work of fact-checkers and the media, and that license those in power to claim that anything compromising is fabricated.
We’ve been preparing a set of recommendations on how to stop these outcomes from getting worse, grounded in a multi-stakeholder, global perspective that looks for multiple, complementary partial approaches rather than a single silver bullet.



Are tech policy debates too focused on the West? And if so, what are we missing when we debate deepfakes from a West‑centred perspective?

Sam Gregory: The discussions on solutions – technical, policy and legal – are currently taking place primarily in Silicon Valley, Washington DC and Brussels. But it’s the Global South, and particularly vulnerable communities in the Global South, that have borne the brunt of similar misinformation and disinformation. It’s critical that the solutions being proposed are informed by their experience, and reflect the tools they use (e.g. WhatsApp as opposed to Facebook). This is why WITNESS has focused part of our work on convenings in Brazil, with upcoming work in southern Africa and Southeast Asia, so that threats and solutions can be identified and prioritised from a non-US, non-European perspective. Otherwise, there is a risk that the solutions to deepfakes outrun the problems and threats. Before we know it, decisions will be made in the US and Brussels about how to detect, control and legislate against deepfakes that will have a global impact – so questions such as who has access to detection tools, how the platforms will handle tools like WhatsApp, and whether laws made in the US might be misused globally are key to talk about now.


Can you name some troubling practices of deepfake‑technology use worldwide?

Sam Gregory: So far, most deepfakes still fall within the area of non-consensual sexual content – and they are most often weaponised in the same ways and spaces as before: against public figures, civic activists, journalists and ordinary people. Another clear trend is people using the ‘it’s a deepfake’ excuse. Based on the experience to date, our problems with deepfakes are as likely to be about people claiming that real, incriminating videos are deepfakes and forcing us to prove they are real, as about actual deepfakes. If this trend continues, it will place burdens on both judicial systems and news verification processes.
If we start to allow for the ‘it’s a deepfake’ excuse too early, we may precipitate this. The smartphone revolution has enabled many more people to show the realities of police violence or war crimes and to expose corruption. However, in our workshops in Brazil, favela-based groups note that the prevalence of deepfakes and synthetic media – or their supposed existence – will be used to challenge the integrity of any video they shoot as critical evidence of police or military violence. At the same time, community leaders who have already faced reputation-based attacks through earlier tools like Photoshop and through online harassment see how deepfakes will add to that.


You have spoken about ‘controlled capture’, used by Truepic. Can you explain what it is?

Sam Gregory: Image, video and audio recordings share similar characteristics – a moment of creation, the possibility of edits, and the capacity to be digitally reproduced and shared. In a nutshell, with controlled capture, an image, video or audio recording is cryptographically signed, geotagged and timestamped at the moment it is created. The idea behind verified or controlled capture is that, in order to verify quickly, consistently and at scale, the verification tools need to be present at the point of capture. Media is gathered with additional rich metadata, hashed, signed, and run through checks against deceptive strategies such as re-capture of an existing video. Truepic is one of a number of apps in this space, which includes open-source options like ProofMode and Tella as well as commercial alternatives like Serelay and Amber.
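
To make the mechanics concrete, here is a minimal sketch in Python of the controlled-capture idea described above: hash the media the moment it is captured, bundle the hash with a timestamp and a geotag, and cryptographically sign the bundle so that any later edit to the media or its metadata is detectable. This is an illustration under assumptions, not the actual implementation of Truepic, ProofMode or any other app mentioned here; the function names, the Ed25519 key handling and the JSON manifest are invented for the example (it requires the `cryptography` package), and real systems add hardware attestation, trusted time sources and re-capture detection on top.

```python
# Illustrative sketch only -- not the Truepic/ProofMode implementation.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def capture_manifest(media: bytes, lat: float, lon: float,
                     key: Ed25519PrivateKey) -> dict:
    """Hash, geotag, timestamp and sign a piece of media at capture time."""
    manifest = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "location": {"lat": lat, "lon": lon},
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = key.sign(payload).hex()  # binds hash + metadata
    return manifest


def verify_manifest(media: bytes, manifest: dict,
                    pub: Ed25519PublicKey) -> bool:
    """Check the signature, then check the media still matches its hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False  # the manifest itself was tampered with
    return hashlib.sha256(media).hexdigest() == claimed["sha256"]


if __name__ == "__main__":
    device_key = Ed25519PrivateKey.generate()  # real apps keep this in secure hardware
    video = b"...raw video bytes..."
    m = capture_manifest(video, lat=-22.9068, lon=-43.1729, key=device_key)
    print(verify_manifest(video, m, device_key.public_key()))             # True
    print(verify_manifest(video + b"edit", m, device_key.public_key()))   # False
```

Signing the metadata together with the hash is what ties ‘when and where’ to ‘what’: changing the pixels, the timestamp or the location invalidates the signature, which is the property controlled-capture tools rely on to verify quickly and at scale.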


In your opinion, what sorts of conflict could arise from proposed deepfake regulation (copyright, freedom of expression, right of publicity, etc.)?

Sam Gregory: Laws are on the minds of legislators in Washington DC and in states across the US. The first laws to be passed so far focus either on non-consensual sexual images or on elections. A recent California law is elections-focused, and I worry that, like other pushes to ban ‘deceptive’ content, it leaves ‘deceptive’ as a broad and abusable category. Laws like this, applied globally, would seem to provide an opportunity to challenge legitimate political speech as deceptive when it is advocacy or an edited point of view about a person. But it’ll be interesting to see how the California law works – it could yet be a model.
A key worry for us at WITNESS is how US and European models play out when they start to place obligations on platforms like Facebook that are then enforced as de facto global law. These policies may be enforced in contexts where Facebook or YouTube face greater pressure to comply with the demands of those in power, or where they have failed to resource themselves adequately to make good content-moderation judgements. Combine this with the use of AI to detect content without sufficient nuance, and we’ve seen how this fails communities and countries globally – for example, content moderation around graphic violence and terrorist content has ended up taking down hundreds of thousands of videos of war crimes evidence in Syria. The platforms end up curtailing free speech and removing vital evidence. Any laws or policies need to be narrowly scoped so that people in power cannot abuse them, with transparency and a right of appeal for those whose content is removed.
