
Former OpenAI employees say whistleblower protection on AI safety is not enough



Thirteen former OpenAI employees said in an open letter that AI companies have retaliated against people who speak out about safety concerns.



Several former OpenAI employees warned in an open letter that advanced AI companies like OpenAI stifle criticism and oversight, especially as concerns over AI safety have increased in the past few months. 

The open letter, signed by 13 former OpenAI employees (six of whom chose to remain anonymous) and endorsed by “Godfather of AI” Geoffrey Hinton, formerly of Google, says that in the absence of any effective government oversight, AI companies should commit to open criticism principles. These principles include avoiding the creation and enforcement of non-disparagement clauses, facilitating a “verifiably” anonymous process to report issues, allowing current and former employees to raise concerns to the public, and not retaliating against whistleblowers. 

The signatories say that while they believe in AI’s potential to benefit society, they also see risks, such as the entrenchment of inequalities, manipulation and misinformation, and the possibility of human extinction. And while a machine that could take over the planet is an important concern, today’s generative AI has more down-to-earth problems, such as copyright violations, the inadvertent sharing of problematic and illegal images, and its ability to mimic people’s likenesses and mislead the public.

The signatories claim that current whistleblower protections “are insufficient” because they focus on illegal activity, whereas many of their concerns, they say, involve conduct that is largely unregulated. The Department of Labor states that workers who report violations involving wages, discrimination, safety, fraud, or withheld time off are protected by whistleblower laws, meaning employers cannot fire, lay off, demote, or reduce the hours of those who come forward. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues,” the letter reads.

AI companies, particularly OpenAI, have been criticized for inadequate safety oversight. Google defended its AI Overviews in Search even after users showed the feature was producing dangerous, though often hilarious, results. Microsoft also came under fire for Copilot Designer, which was generating “sexualized images of women in violent tableaus.”

Recently, several OpenAI researchers resigned following the company’s decision to disband its “Superalignment” team, which focused on addressing AI’s long-term risks, and the departure of co-founder Ilya Sutskever, who had championed safety within the company. One former researcher, Jan Leike, said that “safety culture and processes have taken a backseat to shiny products” at OpenAI.

OpenAI does have a new safety team, one led by CEO Sam Altman.