EU’s ChatGPT taskforce offers first look at detangling the AI chatbot’s privacy compliance

[Image: OpenAI and ChatGPT logos. Image Credits: Didem Mente/Anadolu Agency / Getty Images]

A data protection taskforce that’s spent over a year considering how the European Union’s data protection rulebook applies to OpenAI’s viral chatbot, ChatGPT, reported preliminary conclusions Friday. The top-line takeaway is that the working group of privacy enforcers remains undecided on key legal issues, such as the lawfulness and fairness of OpenAI’s processing.

The issue is important as penalties for confirmed violations of the bloc’s privacy regime can reach up to 4% of global annual turnover. Watchdogs can also order non-compliant processing to stop. So — in theory — OpenAI is facing considerable regulatory risk in the region at a time when dedicated laws for AI are thin on the ground (and, even in the EU’s case, years away from being fully operational).

But without clarity from EU data protection enforcers on how current data protection laws apply to ChatGPT, it’s a safe bet that OpenAI will feel empowered to continue business as usual — despite the existence of a growing number of complaints its technology violates various aspects of the bloc’s General Data Protection Regulation (GDPR).

For example, this investigation from Poland’s data protection authority (DPA) was opened following a complaint about the chatbot making up information about an individual and refusing to correct the errors. A similar complaint was recently lodged in Austria.

Lots of GDPR complaints, a lot less enforcement

On paper, the GDPR applies whenever personal data is collected and processed — something large language models (LLMs) like OpenAI’s GPT, the AI model behind ChatGPT, are demonstrably doing at vast scale when they scrape data off the public internet to train their models, including by syphoning people’s posts off social media platforms.

The EU regulation also empowers DPAs to order any non-compliant processing to stop. This could be a very powerful lever for shaping how the AI giant behind ChatGPT can operate in the region if GDPR enforcers choose to pull it.

Indeed, we saw a glimpse of this last year when Italy’s privacy watchdog hit OpenAI with a temporary ban on processing the data of local users of ChatGPT. The action, taken using emergency powers contained in the GDPR, led to the AI giant briefly shutting down the service in the country.

ChatGPT only resumed in Italy after OpenAI made changes to the information and controls it provides to users in response to a list of demands by the DPA. But the Italian investigation into the chatbot, including key issues such as the legal basis OpenAI claims for processing people’s data to train its AI models in the first place, continues. So the tool remains under a legal cloud in the EU.

Under the GDPR, any entity that wants to process data about people must have a legal basis for the operation. The regulation sets out six possible bases — though most are not available in OpenAI’s context. And the Italian DPA already instructed the AI giant it cannot rely on claiming a contractual necessity to process people’s data to train its AIs — leaving it with just two possible legal bases: either consent (i.e. asking users for permission to use their data); or a wide-ranging basis called legitimate interests (LI), which demands a balancing test and requires the controller to allow users to object to the processing.

Since Italy’s intervention, OpenAI appears to have switched to claiming it has a LI for processing personal data used for model training. However, in January, the DPA’s draft decision on its investigation found OpenAI had violated the GDPR. Since no details of the draft findings were published, we have yet to see the authority’s full assessment on the legal basis point. A final decision on the complaint remains pending.

A precision ‘fix’ for ChatGPT’s lawfulness?

The taskforce’s report discusses this knotty lawfulness issue, pointing out ChatGPT needs a valid legal basis for all stages of personal data processing — including collection of training data; pre-processing of the data (such as filtering); training itself; prompts and ChatGPT outputs; and any training on ChatGPT prompts.

The first three of the listed stages carry what the taskforce couches as “peculiar risks” for people’s fundamental rights — with the report highlighting how the scale and automation of web scraping can lead to large volumes of personal data being ingested, covering many aspects of people’s lives. It also notes scraped data may include the most sensitive types of personal data (which the GDPR refers to as “special category data”), such as health information, sexuality or political views, which requires an even higher legal bar for processing than general personal data.

On special category data, the taskforce also asserts that just because it’s public does not mean it can be considered to have been made “manifestly” public — which would trigger an exemption from the GDPR requirement for explicit consent to process this type of data. (“In order to rely on the exception laid down in Article 9(2)(e) GDPR, it is important to ascertain whether the data subject had intended, explicitly and by a clear affirmative action, to make the personal data in question accessible to the general public,” it writes on this.)

To rely on LI as its legal basis in general, OpenAI needs to demonstrate it needs to process the data; the processing should also be limited to what is necessary for this need; and it must undertake a balancing test, weighing its legitimate interests in the processing against the rights and freedoms of the data subjects (i.e. people the data is about).

Here, the taskforce has another suggestion, writing that “adequate safeguards” — such as “technical measures”, defining “precise collection criteria” and/or blocking out certain data categories or sources (like social media profiles), to allow for less data to be collected in the first place to reduce impacts on individuals — could “change the balancing test in favor of the controller”, as it puts it.

This approach could force AI companies to take more care about how and what data they collect to limit privacy risks.

“Furthermore, measures should be in place to delete or anonymise personal data that has been collected via web scraping before the training stage,” the taskforce also suggests.
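As a purely illustrative sketch, the kinds of safeguards the taskforce describes — “precise collection criteria”, blocking certain sources such as social media profiles, and anonymising personal data before the training stage — might look something like the following. The domain blocklist, regex and record format here are hypothetical and do not reflect OpenAI’s actual pipeline:

```python
import re

# Hypothetical blocklist implementing "precise collection criteria":
# exclude social media sources from scraping entirely.
BLOCKED_DOMAINS = {"facebook.com", "x.com", "instagram.com"}

# Crude pattern for one obvious category of personal data: email addresses.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def keep_source(url: str) -> bool:
    """Drop pages hosted on blocked (e.g. social media) domains."""
    host = url.split("/")[2] if "//" in url else url.split("/")[0]
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

def anonymise(text: str) -> str:
    """Redact email addresses before text enters a training set."""
    return EMAIL_RE.sub("[email redacted]", text)

# Toy scraped records: (source URL, page text).
records = [
    ("https://facebook.com/some-profile", "Contact me at jane@example.com"),
    ("https://example.org/article", "Reach the editor at news@example.org"),
]

# Filter sources first, then anonymise what remains.
training_texts = [anonymise(text) for url, text in records if keep_source(url)]
print(training_texts)
```

The point of the sketch is the ordering the taskforce implies: narrowing what is collected in the first place, then deleting or anonymising personal data before training, rather than relying on downstream fixes.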

OpenAI is also seeking to rely on LI for processing ChatGPT users’ prompt data for model training. On this, the report emphasizes the need for users to be “clearly and demonstrably informed” such content may be used for training purposes — noting this is one of the factors that would be considered in the balancing test for LI.

It will be up to the individual DPAs assessing complaints to decide if the AI giant has fulfilled the requirements to actually be able to rely on LI. If it can’t, ChatGPT’s maker would be left with only one legal option in the EU: asking citizens for consent. And given how many people’s data is likely contained in training data-sets, it’s unclear how workable that would be. (Deals the AI giant is fast cutting with news publishers to license their journalism, meanwhile, wouldn’t translate into a template for licensing Europeans’ personal data, as the law doesn’t allow people to sell their consent; consent must be freely given.)

Fairness & transparency aren’t optional

Elsewhere, on the GDPR’s fairness principle, the taskforce’s report stresses that privacy risk cannot be transferred to the user, such as by embedding a clause in T&Cs that “data subjects are responsible for their chat inputs”.

“OpenAI remains responsible for complying with the GDPR and should not argue that the input of certain personal data was prohibited in first place,” it adds.

On transparency obligations, the taskforce appears to accept OpenAI could make use of an exemption (GDPR Article 14(5)(b)) from the requirement to notify individuals about data collected about them, given the scale of the web scraping involved in acquiring data-sets to train LLMs. But its report reiterates the “particular importance” of informing users their inputs may be used for training purposes.

The report also touches on the issue of ChatGPT ‘hallucinating’ (making information up), warning that the GDPR “principle of data accuracy must be complied with” — and emphasizing the need for OpenAI to therefore provide “proper information” on the “probabilistic output” of the chatbot and its “limited level of reliability”.

The taskforce also suggests OpenAI provide users with an “explicit reference” that generated text “may be biased or made up”.

On data subject rights, such as the right to rectification of personal data — which has been the focus of a number of GDPR complaints about ChatGPT — the report describes it as “imperative” people are able to easily exercise their rights. It also observes limitations in OpenAI’s current approach, including the fact it does not let users have incorrect personal information generated about them corrected, but only offers to block the generation.

However, the taskforce does not offer clear guidance on how OpenAI can improve the “modalities” it offers users to exercise their data rights; it just makes a generic recommendation that the company apply “appropriate measures designed to implement data protection principles in an effective manner” and “necessary safeguards” to meet the requirements of the GDPR and protect the rights of data subjects. Which sounds a lot like ‘we don’t know how to fix this either’.

ChatGPT GDPR enforcement on ice?

The ChatGPT taskforce was set up, back in April 2023, on the heels of Italy’s headline-grabbing intervention on OpenAI, with the aim of streamlining enforcement of the bloc’s privacy rules on the nascent technology. The taskforce operates within a regulatory body called the European Data Protection Board (EDPB), which steers application of EU law in this area. It’s important to note, though, that DPAs remain independent and are competent to enforce the law on their own patch, as GDPR enforcement is decentralized.

Despite DPAs’ undeniable independence to enforce locally, there is clearly some nervousness and risk aversion among watchdogs about how to respond to a nascent technology like ChatGPT.

Earlier this year, when the Italian DPA announced its draft decision, it made a point of noting its proceeding would “take into account” the work of the EDPB taskforce. And there are other signs watchdogs may be more inclined to wait for the working group to weigh in with a final report — maybe in another year’s time — before wading in with their own enforcement actions. So the taskforce’s mere existence may already be influencing GDPR enforcement against OpenAI’s chatbot, by delaying decisions and putting investigations of complaints into the slow lane.

For example, in a recent interview in local media, Poland’s data protection authority suggested its investigation into OpenAI would need to wait for the taskforce to complete its work.

The watchdog did not respond when we asked whether it’s delaying enforcement because of the ChatGPT taskforce’s parallel workstream. A spokesperson for the EDPB, meanwhile, told us the taskforce’s work “does not prejudge the analysis that will be made by each DPA in their respective, ongoing investigations”. But they added: “While DPAs are competent to enforce, the EDPB has an important role to play in promoting cooperation between DPAs on enforcement.”

As it stands, there looks to be a considerable spectrum of views among DPAs on how urgently they should act on concerns about ChatGPT. So, while Italy’s watchdog made headlines for its swift interventions last year, Ireland’s (now former) data protection commissioner, Helen Dixon, told a Bloomberg conference in 2023 that DPAs shouldn’t rush to ban ChatGPT — arguing they needed to take time to figure out “how to regulate it properly”.

It is likely no accident that OpenAI moved to set up an EU operation in Ireland last fall. The move was quietly followed, in December, by a change to its T&Cs — naming its new Irish entity, OpenAI Ireland Limited, as the regional provider of services such as ChatGPT — setting up a structure whereby the AI giant was able to apply for Ireland’s Data Protection Commission (DPC) to become its lead supervisor for GDPR oversight.

This regulatory-risk-focused legal restructuring appears to have paid off for OpenAI, as the EDPB ChatGPT taskforce’s report suggests the company was granted main establishment status as of February 15 this year — allowing it to take advantage of a mechanism in the GDPR called the One-Stop Shop (OSS), which means any cross-border complaints arising since then will get funnelled via a lead DPA in the country of main establishment (i.e., in OpenAI’s case, Ireland).

While all this may sound pretty wonky, it basically means the AI company can now dodge the risk of further decentralized GDPR enforcement — like we’ve seen in Italy and Poland — as it will be Ireland’s DPC that gets to decide which complaints get investigated, how, and when, going forward.

The Irish watchdog has gained a reputation for taking a business-friendly approach to enforcing the GDPR on Big Tech. In other words, ‘Big AI’ may be next in line to benefit from Dublin’s largess in interpreting the bloc’s data protection rulebook.

OpenAI was contacted for a response to the EDPB taskforce’s preliminary report but at press time it had not responded.

Responding to the EDPB’s report after we queried the suggestion that OpenAI can now avail itself of the GDPR’s OSS, Maciej Gawronski of the law firm GP Partners, which is representing the complainant behind the Polish ChatGPT GDPR investigation, told TechCrunch: “We have not been provided by anyone with any information which would suggest that OpenAI’s EU office has any powers to take ‘decisions on the purposes and means of the processing of personal data’ in the meaning of Article 4 point 16 letter a) of the GDPR.”

“Given the centralised nature of ChatGPT service it is impossible to have headquarters in the US and personal data processing headquarters in the EU,” he added. “On top of that, I’ve just checked my May 24 invoice from OpenAI for using ChatGPT. It is issued by Open AI LLC, SF, CAL, US.”

In further remarks, Gawronski described the EDPB report as “enigmatic and shallow”, suggesting it reads as if it was “drafted by the Irish [DPC]”. “It seems like EDPB is trying hard to help OpenAI to look as compliant [as] possible,” he added. “We are still of the opinion that UODO [Polish DPA] has competence and obligation to examine and decide our complaint.”

This report was updated with additional comment.
