
FEC could limit AI in political ads ahead of 2024

The RNC and Ron DeSantis PACs have been using the tech for months.

A person votes in a booth at the Rios Rosas polling station in Madrid during Spain’s general election on July 23rd, 2023.
Photo by Oscar Del Pozo / AFP via Getty Images

After weeks of back and forth, the Federal Election Commission decided that it will, maybe, make rules regulating the use of AI-generated content in political ads.

At an open meeting Thursday, the FEC voted to open up the petition for public comment, kicking off a process that could put new rules governing how campaigns use AI into effect before the end of the year. The petition, filed by the advocacy group Public Citizen, calls on the commission to leverage its authority to punish fraud by creating rules banning candidates and political parties from using AI to misrepresent their opponents.

“The need to regulate deepfakes and other deceptive uses of AI in election ads becomes more urgent with each passing day,” Lisa Gilbert, Public Citizen executive vice president, said in a statement Thursday. “The FEC’s decision to proceed with a public comment period is an encouraging sign that the threat AI poses to our democracy may finally be taken seriously.”

Republican Commissioner Allen Dickerson opposed the petition when it was first filed earlier this summer but voted to start the commenting period on Thursday. Still, Dickerson questioned whether the FEC has the necessary authority to implement the requested rules, suggesting that Public Citizen and others should be demanding federal lawmakers take action first. 

“It would be news to many, I suspect, to learn that the FEC may police candidates telling lies about their opponents. That was Congress’ choice,” Dickerson said during Thursday’s meeting. “The FEC has unanimously asked it to revisit that choice and to grant us broader authority to punish fraud by campaigns, but as is Congress’ right, it has chosen to ignore that request.”

Thursday’s vote comes after a flurry of action from both Congress and the White House to regulate AI. In May, OpenAI CEO Sam Altman testified before the Senate Judiciary Committee, saying that lawmakers should quickly pass new rules regulating the industry and specifically asking for a government licensing program. The White House also secured a handful of voluntary commitments from some of the top AI companies, like Altman’s company, to develop the technology responsibly. 

In June, Senate Majority Leader Chuck Schumer (D-NY) put out a plan for how Congress should approach regulating the industry called the SAFE Innovation Framework. The plan asks lawmakers to address a variety of risks associated with AI, from national security and job loss to copyright and misinformation.

Meanwhile, political groups like the Republican National Committee and Never Back Down, a Ron DeSantis super PAC, have already started using the tech. In April, the RNC released an AI-generated ad in response to President Joe Biden’s election announcement, painting a dystopian version of the future if he won reelection. Never Back Down used AI to mimic former President Donald Trump’s voice in an ad last month.   

These attack videos have migrated to social platforms that have similarly loose rules. In June, the DeSantis campaign shared a video containing fake images of Trump kissing Anthony Fauci, a former White House chief medical advisor who led the US’s response to covid-19.

The Democratic National Committee and the RNC have previously declined to comment on whether they would issue internal rules around AI's use.

While Congress has made clear its intent to put up rules around the technology, few actual bills have been introduced. Rep. Yvette Clarke (D-NY) was the first to put out a bill requiring campaigns and political groups to disclose when they include AI-generated content in ads. Sen. Amy Klobuchar (D-MN) has introduced a companion to Clarke's measure in the Senate.

But Congress has dragged its feet on approving any rules regulating tech despite the wave of hearings, legislation, and heated statements lawmakers have put out in recent years. With the 2024 elections approaching, Gilbert and Public Citizen issued their petition as a means of getting some rules on the books before campaign season takes off in full force.

“This is one piece of the solution,” Gilbert said in an interview with The Verge on Wednesday. “We will also need Congress to act on banning across the board or requiring disclosure across the board in other spaces that campaign ads are being generated.”

After Thursday’s vote, Klobuchar said that she would be introducing a new bill to increase the FEC’s authority over AI. 

“AI is increasingly being used to generate misleading content that can be used against candidates regardless of party,” Klobuchar said in a statement to The Verge. “While today’s decision is a step forward, we need the FEC to act now. I plan to introduce bipartisan legislation to make the FEC’s authority to deal with this clear, whether they already have the authority or not.”

Once the petition hits the Federal Register, the public will have 60 days to comment.