Davos: Tech leaders ponder solution to election tampering online



With elections taking place around the world this year, tech leaders gathered in Davos debated on Tuesday evening (16 January) the recent rise of AI, its implications for misleading campaigns and deceptive content in the run-up to polls, and how online integrity can be maintained.

This week, world leaders, innovators, entrepreneurs, and prominent academics are coming together to discuss the world’s most pressing issues at the World Economic Forum in Davos. Part of this year’s agenda was an exchange on the implications of AI for democracy.

With four billion people going to the polls this year, including in five of the world’s largest democracies, misinformation and disinformation on the internet could have a significant impact on election results.

The rise of Artificial Intelligence plays a big part in this, either as a threat amplifier or as a tool to fact-check information.

Threat amplifier or problem solver?

“In previous election cycles, we’ve seen robocalls. We’ve seen automated text message campaigns that are sending incorrect information to voters about their voting location, about whether or not their poll is open or targeted, manipulated messages that are designed to influence their behaviour,” said Alexandra Reeve Givens, CEO of the Center for Democracy and Technology.

“Generative AI makes it easier to target that. […] it’s easier than ever to come up with those tailored, personalised messages,” she added.

At the same time, machine learning can be applied to predict new threats. Google is one of the tech companies that sees even more potential in Large Language Models (LLMs) for building faster and more adaptable enforcement systems.

“We want to be in a position where anything that’s produced by our generative AI can be watermarked so invisibly, even if a snippet of a video or a piece of an image is used, that it will be detectable automatically,” Matthew Brittin, president of EMEA Business & Operations for Google, told Euractiv.

He was referring to SynthID, a beta tool from Google DeepMind that embeds a digital watermark into AI-generated images and audio.

Regulating AI

The EU is also expected to face mis- and disinformation campaigns in the run-up to the European elections. However, Czech Foreign Minister Jan Lipavský has high hopes for the world’s first comprehensive AI regulation, the EU’s AI Act, which is currently in the final stages of the legislative process.

“We will see more and more false content being used, which will disturb the election process, which will disturb the way how the society makes decisions,” Lipavský stated.

“I honestly like the European way. The European regulation on an AI, for example, will tackle some of those issues,” he added.

Referring to autocracies that disempower the opposition by labelling their content illegal misinformation and removing it, Reeve Givens saw a potential threat in states abusing their power to regulate technology.

“I think there are real concerns with a world where the government gets to decide whether or not the AI is acting lawfully,” she said.

“Do you want the CEO of a tech company deciding if information should be up-ranked, or the government minister deciding if the information should be up-ranked? Neither of those solutions is ideal,” she said.

Who is responsible? 

Regulation can tackle threats, but technology can also circumvent it: cyberattacks on an election process can be executed from anywhere.

To prepare for this asymmetry, André Kudelski, CEO of the Kudelski Group, suggested using technologies to implement “more content traceability.”

With people searching for information on candidates, voter registration deadlines, and local polling stations, Matthew Prince, CEO of Cloudflare, argued that the media play a central role in fact-checking and verifying content, backed up by tools from technologists.

However, search engines and social media platforms also share the responsibility “to help surface the trusted sources of information,” Reeve Givens added, criticising some social media companies for scaling back their investments in trust and safety around elections.

Google said that, in the run-up to polls, it has committed to displaying government information at the top of search results, indicating polling locations on Google Maps, ensuring high-quality election news on YouTube, and verifying advertisers’ identities.

The multinational tech giant also intends to restrict the election-related queries for which Google’s chatbot Bard and its AI-powered Search Generative Experience (SGE) will return responses, prioritising “testing for safety risks ranging from cybersecurity vulnerabilities to misinformation and fairness.”

As recently as August, German press reports described Bard and SGE as still under development and unreliable.

Multimedia platforms also form part of the strategies attackers use to radicalise society.

“Russia’s information war took place with supporting both left or right extremists just for the sake of splitting up societies,” Lipavský added.

For the Czech minister, companies need to step up their understanding of corporate social responsibility to ensure “that their tools are not misused.”

[Edited by Luca Bertuzzi/Zoran Radosavljevic]
