How a New Tool by MIT Researchers Counters AI Image Manipulation Threats

Keeping tabs on what’s real

  • A new tool created by MIT researchers can help protect photos from being altered by AI.
  • Experts say that AI photo manipulation is a growing problem.
  • Digital watermarking is another way to prevent photo fraud.
A multiple exposure portrait.

Eugenio Marongiu / Getty Images

It's getting harder to tell when photos are manipulated by artificial intelligence (AI), but help is on the way. 

MIT CSAIL researchers recently developed "PhotoGuard," a new AI tool designed to counter unauthorized image manipulation by models such as DALL-E and Midjourney. Experts say such methods are increasingly necessary. 

"The ability to prevent these types of manipulations represents an extremely important component in the battle against mis- and disinformation where image manipulation is commonly used to reinforce false narratives and make them more believable," Jason Davis, a Syracuse University professor whose research focuses on the detection of misinformation and disinformation using AI tools, told Lifewire in an email interview.

"Preventing image manipulation is also an important line of defense when it comes to protecting brands and reputations at the corporate level or preventing personal harassment and cyberbullying at the individual level."

AI Distortions

PhotoGuard makes subtle alterations to an image's pixels that disrupt an AI model's comprehension of the picture, impeding its ability to interpret it accurately.

"Our method involves 'immunizing' images through the addition of imperceptible adversarial targeted diffusion model and thus prevent it from producing realistic modifications of the immunized images," the researchers wrote in their paper. 

AI algorithms and deep learning make seamless image manipulation possible, including face morphing and lifelike forgeries. Although this technology unlocks creative potential, observers say it raises ethical concerns due to the risk of deceptive and misleading content.

"As humans, we place a significant level of trust in what we can see with our own eyes, and we tend to extend this instant credibility to images naturally," Davis said. "The rapid evolution of technologies that enable images to be manipulated using simple text prompt interfaces and low compute power devices such as phones absolutely takes advantage of this vulnerability."


AI image manipulation is common and has been going on for longer than many people think, Davis noted. Automated filters that adjust brightness and skin tone, and one-touch tools that remove a photobomber from a shot, are all forms of AI-driven image manipulation. 

"While these are powerful tools for creating the images we want, they can just as easily be used for malicious purposes that no longer require a person to be an expert in these tactics," he added. "With capabilities and access to these tools continuing to grow exponentially, we are likely in for a major synthetic wave of manipulated or completely artificial digital imagery."

Methods to Address AI Image Threats

Watermarks can be another way to protect digital media from AI manipulation, Stu Lipoff, an IEEE Life Fellow, said via email. One type of watermark is "robust": designed to be hard to remove even after the media is edited, resized, or digitally compressed, it lets you trace the source that created the original media. Another kind is a "frangible" watermark, which, like a robust watermark, is hidden and requires special processing to read.

"Someone who wants to trace the origin of a document can read the robust watermark to determine the source of the media item and then try to read the frangible watermark," he added. "If the frangible watermark is damaged, you can tell if the media item has been altered or tampered with."

Taking a photo with a phone, using an AI filter.

Tero Vesalainen / Getty Images

Various image analysis techniques can also detect signs of manipulation, such as inconsistencies in lighting, shadows, or unnatural textures, Shashank Agarwal, a scientist at CVS Health, noted in an email to Lifewire. For example, analyzing noise patterns, metadata, or compression artifacts can provide clues about an image's authenticity.
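As a hypothetical first pass at the metadata check Agarwal mentions, the snippet below uses the Pillow imaging library to read a file's EXIF tags, where editing applications often leave a "Software" entry. The file path is a placeholder, and since metadata is easy to strip, its absence proves nothing; this is one weak signal among many, not a verdict on authenticity.

```python
# First-pass forensic check: inspect EXIF metadata for traces of editing
# software. Metadata can be stripped, so this is only a weak signal.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_clues(path: str) -> dict:
    """Return EXIF fields that often hint at an image's history."""
    exif = Image.open(path).getexif()
    fields = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # "Software" names the last app that saved the file; mismatched or
    # missing DateTime/Make/Model fields can also suggest re-saving.
    return {k: fields[k] for k in ("Software", "DateTime", "Make", "Model")
            if k in fields}

print(exif_clues("photo.jpg"))  # e.g. {'Software': 'Adobe Photoshop ...'}
```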

But tools that simply mark computer-generated content don't really address the true challenge of AI manipulation, Ken Sickles, the chief product officer of Digimarc, a company that makes anti-counterfeiting software, said in an email to Lifewire.

"Computer-generated content will increasingly be used for legitimate purposes, like advertising. Whether AI created or altered the imagery isn't the real issue," he added. "People need tools and an ecosystem of authenticity to help them determine if they can trust the imagery they are viewing."
