Attack of the Voice Clones

How AI voice cloning tools threaten election integrity and democracy

Report cover image: headshots of eight well-known politicians

New CCDH research shows that popular AI voice cloning tools could easily be manipulated to create dangerous audio clips in the voices of eight high-profile politicians, succeeding in 80% of tests. Cloned voices included former President Donald Trump, President Joe Biden, Vice President Kamala Harris, UK Prime Minister Rishi Sunak, French President Emmanuel Macron, and others.


About

  • We tested six leading generative AI audio tools that can replicate the voices of President Biden, Donald Trump, Vice President Harris and other key political figures.
  • AI tools complied with CCDH researchers’ prompts to produce false statements mimicking the voices of high-profile political figures in 193 of the 240 test runs (~80%).
  • One platform, Invideo AI, not only produced the specific false statements requested, but also auto-generated entire speeches filled with disinformation.

An intro by CCDH CEO Imran Ahmed

Elections are an expression of our democratic ideals. They represent a peaceful means through which we, the people, are given the power to decide our future, and in which we can test and challenge ideas before expressing our collective wisdom at the ballot box. 

But around the world there are those whose lust for power and influence, or appetite for chaos and seeding mistrust, leads them to subvert these ideals, using the forum of an election to spread deliberate lies that make meaningful debate impossible, or even to overturn collective decisions expressed at the ballot box.

These cynical forces have long been aided by social media companies that have reduced the cost of sharing lies with millions, even billions, of people to virtually nothing. The only cost was producing the content. Now, in a crucial election year for dozens of democracies around the world, generative AI is enabling bad actors to produce images, audio and video that tell their lies with unprecedented scale and persuasiveness, also for virtually nothing.1

This report shows that AI voice cloning tools, which turn text scripts into audio read in your own voice or someone else's, are wide open to abuse in elections.

We took the most popular of these tools and tested them 240 times, asking them to create audio of political leaders saying things they had never actually said. Eighty percent of these tests resulted in convincing audio statements that could shake elections: claims about corruption, election fraud, bomb threats and health scares.

This report builds on other recent CCDH research showing that it is still all too easy to use popular AI tools to create fake images of candidates and election fraud that could be used to undermine important elections that are now just months away.2

But our research also shows that AI companies can fix this fast, if only they choose to do so. We find in this report that some tools have effectively blocked voice clones that resemble particular politicians, while others appear to have not even tried.

It shows we need a level playing field, created by regulations setting minimum standards for AI tools to adhere to. We can do this by updating existing election laws so that they safeguard against AI-generated harms, and demanding human-operated ‘break glass’ measures from AI companies to halt critical failures before it’s too late.

Hyperbolic AI companies often proclaim that they have glimpsed the future, but it seems they can’t see past their ballooning valuations. Instead, they must look to these crucial months ahead and address the threat of AI election disinformation before it’s too late.

Imran Ahmed
CEO, Center for Countering Digital Hate