Democrat sounds alarm over AI-generated political ads with new bill

After the RNC’s dystopian AI-generated attack ad, Rep. Yvette Clarke is calling for more transparency.

[Image: Hands with additional fingers typing on a keyboard. Álvaro Bernis / The Verge]

A phony AI-generated attack ad from the Republican National Committee (RNC) offered Congress a glimpse into how the tech could be used in next year’s election cycle. Now, Democrats are readying their response.

On Tuesday, Rep. Yvette Clarke (D-NY) introduced a new bill that would require disclosure of AI-generated content in political ads. Clarke told The Washington Post on Tuesday that her bill was a direct response to the RNC ad released last week. The video came out soon after President Joe Biden announced his 2024 reelection campaign, depicting a dystopian future in which a reelected Biden reinstates the draft to aid Ukraine’s war effort and China invades Taiwan.

“The upcoming 2024 election cycle will be the first time in U.S. history where AI generated content will be used in political ads by campaigns, parties, and Super PACs. Unfortunately, our current laws have not kept pace with the rapid development of artificial intelligence technologies,” Clarke said in a statement on Tuesday.

The debate over whether to regulate AI and machine learning technology also dogged the 2020 presidential election. In the run-up to that election, a bogus video of then-House Speaker Nancy Pelosi appearing to drunkenly slur her words went viral across social media platforms and spurred a handful of congressional hearings. Meta, TikTok, and other major social media companies later banned deepfakes, but lawmakers’ efforts never produced any meaningful regulation.

Clarke’s REAL Political Advertisements Act would apply to still-image and video ads, requiring a message at either the beginning or the end of an ad disclosing the use of AI-generated content.

With a new election cycle on the horizon, AI-generated and other doctored video content has only grown more rampant online. Over the last year, the increased accessibility of AI tech and mounting corporate investment in it have spooked lawmakers, prompting a deluge of new bills and regulatory proposals.

Last month, Senate Majority Leader Chuck Schumer (D-NY) circulated a broad framework among experts outlining Democrats’ approach to AI regulation. The framework proposed new rules requiring AI developers to disclose their data sources, who trained their algorithms, the intended audience, and an explanation of how an algorithm arrives at its responses, according to Axios.

In a March interview with The Verge, Sen. Mark Warner (D-VA) lamented how slow Congress has been to regulate emerging technologies. Speaking about social media’s potential for harm, he said, “I wish we would’ve put some rules in place ahead of the game.” He continued, “We have a chance to do that with AI.”

Without new laws on the books, federal agencies have started to fill the gaps left by Congress. Last month, the Commerce Department asked the public for input on how the federal government should regulate AI algorithms. The Federal Trade Commission has begun warning companies against using biased AI, arguing that it already has the authority to go after them over potentially discriminatory algorithms.