Attackers were able to read internal chats at OpenAI

Someone penetrated the communication system, employees warned of espionage, and OpenAI assumed the attacker was a private individual.

The OpenAI logo on the facade of the office building in San Francisco.

(Image: Shutterstock/ioda)

This article was originally published in German and has been automatically translated.

There is said to have been an attack on OpenAI at the beginning of the year: someone gained access to the company's internal messaging system and stole details about its technologies. The case has only now come to light. OpenAI reportedly assumed at the time that the attack posed no threat, in particular not to national security. The New York Times reported on the incident. The report also fits with statements by Leopold Aschenbrenner, a former OpenAI employee who warns in an essay of espionage by the Chinese Communist Party.

According to the New York Times, the attacker found a way into internal chats in which employees discussed their latest technological advances. The attacker did not gain access to the technology itself, and no data from users or partners was leaked either. According to the newspaper's sources, OpenAI assumed the attacker was a private individual. However, some employees were reportedly concerned that China could also find ways to steal information and thereby jeopardize national security.

Aschenbrenner is likely to have been one of those colleagues. He says he warned about China and that this was one of the reasons he was dismissed from the AI company; OpenAI denies this and cites other reasons. In his essay, Aschenbrenner even sees a great danger of an "all-out war" between the USA and China. AI, he believes, will be smarter than most college graduates in just two years, and by the end of the decade there will be Artificial General Intelligence. In his view, robots will supply enough data, and a large share of the world's gas could be used to power AI.

Other OpenAI employees have recently left the company voluntarily, including co-founder Ilya Sutskever and safety researcher Jan Leike. One of their tasks was to develop a safety concept from an ethical and moral point of view, with the goal of controlling a "superintelligence" whose goals are not aligned with human values.

There is no concrete information on how the attacker got into the communication systems, or even which systems were involved. It also remains unclear how long the attacker was able to read along, or how OpenAI knows that it was a private individual.

(emw)