Scarlett Johansson had previously declined OpenAI’s approach to voice its software Photograph: Valéry Hache/Getty

If Scarlett Johansson can’t bring the AI firms to heel, what hope for the rest of us?

John Naughton

OpenAI’s unsubtle approximation of the actor’s voice for its new GPT-4o software was a stark illustration of the firm’s high-handed attitude

On Monday 13 May, OpenAI livestreamed an event to launch a fancy new product – a large language model (LLM) dubbed GPT-4o – that the company’s chief technology officer, Mira Murati, claimed to be more user-friendly and faster than boring ol’ ChatGPT. It was also more versatile, and multimodal, which is tech-speak for being able to interact in voice, text and vision. Key features of the new model, we were told, were that you could interrupt it in mid-sentence, that it had very low latency (delay in responding) and that it was sensitive to the user’s emotions.

Viewers were then treated to the customary toe-curling spectacle of “Mark and Barret”, a brace of tech bros straight out of central casting, interacting with the machine. First off, Mark confessed to being nervous, so the machine helped him to do some breathing exercises to calm his nerves. Then Barret wrote a simple equation on a piece of paper and the machine showed him how to find the value of X, after which he showed it a piece of computer code and the machine was able to deal with that too.

So far, so predictable. But there was something oddly familiar about the machine’s voice – a sultry female persona called “Sky” whose conversational repertoire spanned empathy, optimism, encouragement and perhaps even some flirty overtones. It was reminiscent of someone. But who?

It turned out that it reminded many viewers of Scarlett Johansson, the celebrated Hollywood star who provided the female voice in Spike Jonze’s 2013 film Her, which is about a guy who falls in love with his computer’s operating system. This, apparently, is the favourite movie of OpenAI’s chief executive, Sam Altman, who declared at an event in San Francisco in 2023 that the movie had resonated with him more than other sci-fi films about AI.

The person most surprised by GPT-4o’s voice, though, was Johansson herself. It turns out that Altman had approached her last September, seeking to hire her as the chatbot’s voice. “He told me,” she said in a statement, “that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives, and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said that my voice would be comforting to people.”

She declined the offer, but after the demo was livestreamed she found herself besieged by “friends, family and the general public” telling her how much GPT-4o sounded like her. And she was even more pissed off to discover that Altman had tweeted the single word “Her” on X, which she interpreted as an insinuation that the similarity between the machine’s voice and her own was intentional.

Needless to say, OpenAI vehemently denied any sharp practice. “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” an OpenAI spokesperson said in a statement that the company attributed to Altman. “We cast the voice actor behind Sky’s voice before any outreach to Ms Johansson.”

Nevertheless, the statement goes on: “Out of respect for Ms Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms Johansson that we didn’t communicate better.” Aw, shucks.

In Her, Joaquin Phoenix plays Theodore, a man who falls in love with an operating system voiced by Scarlett Johansson. Photograph: Warner Bros

Now at one level, of course, you could say that this is a storm in a champagne goblet. It’s possible that OpenAI’s newfound “respect” for Johansson may have nothing to do with the fact that she is famous and has expensive lawyers. It’s also conceivable that Altman wasn’t trolling her by tweeting “Her” when he did. Likewise, pigs may fly in close formation.

On a broader level, though, this little fracas, as tech writer Charlie Warzel put it in The Atlantic, shines a useful light on the dark heart of generative AI – a technology that is built on theft; rationalised by three coats of prime legalistic posturing about “fair use”; and justified by a worldview which says that the hypothetical “superintelligence” tech companies are building is too big, too world-changing, too important for mundane concerns such as copyright and attribution. Warzel is right when he says that “the Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: this is happening, whether you like it or not”. To which the proper reply is: it is, and most of us don’t.

What I’ve been reading

Liberal ideal
A lovely New Statesman profile of formidable political reformer Roy Jenkins by Simon Jenkins (no relation).

Notes for notes
Technology expert Om Malik explains how he writes in an interesting interview on the People and Blogs website.

Cyber insecurity
There’s a good reflective piece in the Register on how the British Library’s communications strategy had to change after a ransomware attack.
