How AI Could Help Protect Against the Spread of Misinformation

Finding out where it’s worst is half the battle

  • New AI techniques could help identify and combat online misinformation.
  • AI-generated deepfakes make voice and video scams more dangerous.
  • Cryptography could also help verify authentic content on the web.
A hooded hacker using a smartphone to spread fake news.

Igor Stevanovic / 500px / Getty Images

Misinformation is a growing problem online, but help may be on the way thanks to artificial intelligence (AI). 

Research suggests that machine learning techniques and blockchain technology may help combat the proliferation of fake news. The new approach lets mitigation efforts concentrate on the areas where misinformation is most likely to cause significant public harm. Experts say finding effective methods to combat disinformation is of utmost importance.

"We rely on information to make informed decisions," Manjeet Rege, the director of the Center for Applied Artificial Intelligence at the University of St. Thomas, told Lifewire in an email interview. "So when consumers are unable to distinguish between real and fake information, they could be more susceptible to making poor decisions. With the advent of social media, fake news can go viral quickly and can potentially lead to knee-jerk reactions by the public."

Is AI Creating or Finding Fake News?

A paper by researchers at Binghamton University's School of Management recommends employing machine learning systems to assess the potential harm of content on its audience and to pinpoint the most egregious offenders. For instance, during the peak of the COVID-19 pandemic, fake news promoting unverified treatments over vaccines was among the most damaging.

"We're most likely to care about fake news if it causes harm that impacts readers or audiences. If people perceive there's no harm, they're more likely to share the misinformation," Thi Tran, a professor of management information systems who led the research, said in the news release. "The harms come from whether audiences act according to claims from the misinformation or if they refuse the proper action because of it. If we have a systematic way of identifying where misinformation will do the most harm, that will help us know where to focus on mitigation."
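The study's actual model isn't reproduced in the article, but the triage idea Tran describes can be illustrated with a toy sketch. The keywords and weights below are invented for illustration only; a real system would use a trained machine learning classifier rather than a hand-built word list.

```python
# Toy sketch of harm-based triage: score posts by how likely they are
# to prompt harmful action, then rank the riskiest first for review.
# The keywords and weights are hypothetical, not from the Binghamton study.
HARM_WEIGHTS = {
    "cure": 3,      # unverified treatment claims
    "miracle": 3,
    "vaccine": 2,   # health-decision context raises stakes
    "detox": 2,
}

def harm_score(post: str) -> int:
    """Sum the weights of harm-associated words in a post."""
    return sum(HARM_WEIGHTS.get(word, 0) for word in post.lower().split())

posts = [
    "this miracle cure beats any vaccine",
    "local weather looks sunny today",
]

# Review the highest-risk content first.
ranked = sorted(posts, key=harm_score, reverse=True)
print(ranked[0])  # the post promoting an unverified treatment
```

The point of the sketch is the prioritization step, not the scoring: whatever model produces the harm estimate, sorting content by that estimate tells moderators where mitigation matters most.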

As AI develops and grows more sophisticated, it's becoming harder for individuals to distinguish what is real and what isn't, Sameer Hajarnis, the chief product officer of OneSpan, a digital verification company, noted in an email.  

"For example, AI-generated deepfakes make voice and video phishing a lot more dangerous," he added. "Criminals using social engineering attacks are on the rise, and the threat posed by deepfakes is now widespread."

In a recent incident, Martin Lewis, a prominent consumer finance advocate from the UK, appeared to endorse an investment opportunity backed by Elon Musk. The video was later revealed to be an AI-generated deepfake, and the investment opportunity was a scam, with no actual involvement from Lewis or Musk.


Many Approaches to Fighting Disinformation

The Binghamton University approach isn't the only way to help fight fake news. While AI can generate counterfeit audio and videos, it can also be used for detecting authenticity, Rege said. 

"In fact, some of the AI-generated content can look very realistic to a human but can be identified as fake fairly easily by an AI model," he added. 

Another method is to provide proof of personhood using cryptography, cybersecurity expert and IEEE member Yale Fox said in an email interview. If you want to record a video and put it on social media, the best thing to do would be to encode the video with a public key.

"From there, the video either has the key or it doesn't," he added. "If it has the key encoded, then it's very easy to detect on any platform without even using AI. It can run on virtually any device, in a browser, etc. If the video is being posted on an elected phone, it would have the key and would pass verification tests."

A "Fake News" key on a computer keyboard.

Peter Dazeley / Getty Images

Fake news is a political and cultural problem as well as a technical issue, Subramaniam Vincent, director of the journalism and media ethics program at the Markkula Center for Applied Ethics at Santa Clara University, said in an email. 

"It will take collaboration and consensus building amongst the AI industry actors and news media companies, and also generating a new thrust towards democratic culture in politics and elections upstream of tech," he added. "All of that will make it easier to counter the inevitable pushback that bad actors create when AI tools are used to detect, label, and stop the distribution of fake news. AI is a powerful element, but not the only one, in the bigger mess of a battle for democracy in America and elsewhere."
