EU lawmakers bag late night deal on ‘global first’ AI rules

EU AI Act trilogue press conference. Image Credits: European Commission

After marathon ‘final’ talks, which stretched to almost three days, European Union lawmakers have tonight clinched a political deal on a risk-based framework for regulating artificial intelligence. The file was originally proposed back in April 2021, but it has taken months of tricky three-way negotiations to get a deal over the line. The development means a pan-EU AI law is definitively on the way.

Giving a triumphant but exhausted press conference in the small hours of Friday night/Saturday morning local time, key representatives for the European Parliament, Council and the Commission — the bloc’s co-legislators — hailed the agreement as hard fought, a milestone achievement and historic, respectively.

Taking to X to tweet the news, European Commission president Ursula von der Leyen — who made delivering a regulation to promote “trustworthy” AI a key priority of her term when she took up the post in late 2019 — also lauded the political agreement as a “global first”.

Prohibitions

Full details of what’s been agreed won’t be entirely confirmed until a final text is compiled and made public, which may take some weeks. But a press release put out by the European Parliament confirms the deal reached with the Council includes a total prohibition on the use of AI for:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent their free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

The use of remote biometric identification technology in public places by law enforcement has not been completely banned — but the parliament said negotiators had agreed on a series of safeguards and narrow exceptions to limit use of technologies such as facial recognition. This includes a requirement for prior judicial authorisation, with uses limited to a “strictly defined” list of crimes.

Retrospective (non-real-time) use of remote biometric ID AIs will be limited to “the targeted search of a person convicted or suspected of having committed a serious crime”. Real-time use of this intrusive AI tech, meanwhile, will be limited in time and location, and can only be used for the following purposes:

  • targeted searches of victims (abduction, trafficking, sexual exploitation),
  • prevention of a specific and present terrorist threat, or
  • the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).

The Council’s press release on the deal emphasizes that the provisional agreement “clarifies that the regulation does not apply to areas outside the scope of EU law and should not, in any case, affect member states’ competences in national security or any entity entrusted with tasks in this area”. It also confirms the AI act will not apply to systems exclusively for military or defence purposes.

“Similarly, the agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation, or for people using AI for non-professional reasons,” the Council added.  

Civil society groups have reacted sceptically — raising concerns the agreed limitations on state agencies’ use of biometric identification technologies will not go far enough to safeguard human rights. Digital rights group EDRi, which was among those pushing for a full ban on remote biometrics, said that whilst the deal contains “some limited gains for human rights”, it looks like “a shell of the AI law Europe really needs”.

Rules for ‘high risk’ AIs, and general purpose AIs

The package agreed also includes obligations for AI systems that are classified as “high risk” owing to their “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law”.

“MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour, are also classified as high-risk,” the parliament wrote. “Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.”

There was also agreement on a “two-tier” system of guardrails to be applied to “general” AI systems, such as the so-called foundational models underpinning the viral boom in generative AI applications like ChatGPT.

As we reported earlier, the deal reached on foundational models/general purpose AIs (GPAIs) includes some transparency requirements for what co-legislators referred to as “low tier” AIs — meaning model makers must draw up technical documentation and produce (and publish) detailed summaries about the content used for training in order to support compliance with EU copyright law. For “high-impact” GPAIs with so-called “systemic risk” — defined as those whose cumulative training compute, measured in floating point operations, exceeds 10^25 — there are more stringent obligations.
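For a sense of how that compute threshold would sort models into the two tiers, here is a minimal Python sketch. It is illustrative only: the 6 × parameters × tokens rule of thumb for estimating training compute, the function names and the example figures are assumptions on our part, not anything taken from the regulation.

```python
# Illustrative sketch only: the deal's threshold is on cumulative training
# compute (in FLOPs); the 6 * params * tokens estimate is a common rule of
# thumb, not something the regulation itself prescribes.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # "high-impact" GPAI threshold per the deal


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate using the ~6 FLOPs/parameter/token heuristic."""
    return 6 * n_parameters * n_training_tokens


def gpai_tier(training_flops: float) -> str:
    """Map estimated training compute to the two-tier GPAI regime described above."""
    if training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "high-impact GPAI (systemic risk obligations)"
    return "low tier GPAI (transparency obligations)"


if __name__ == "__main__":
    # Hypothetical 70B-parameter model trained on 2 trillion tokens.
    flops = estimate_training_flops(70e9, 2e12)
    print(f"{flops:.2e} FLOPs -> {gpai_tier(flops)}")
```

On those assumed figures the estimate comes out around 8 x 10^23 FLOPs, comfortably below the 10^25 line — which is the same order-of-magnitude argument Artigas would later make about Mistral (see below).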

“If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency,” the parliament wrote. “MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.”

The Commission has been working with industry on a stop-gap AI Pact for some months — and it confirmed today this is intended to plug the practical gap until the AI Act comes into force.

While foundational models/GPAIs that have been commercialized face regulation under the Act, R&D is not intended to be in scope of the law — and fully open sourced models will have lighter regulatory requirements than closed source, per today’s pronouncements.

The package agreed also promotes regulatory sandboxes and real-world testing, to be established by national authorities, to support startups and SMEs in developing and training AIs before they are placed on the market.

Penalties and entry into force

Non-compliance can lead to fines ranging from €35 million or 7% of global turnover down to €7.5 million or 1.5% of turnover, depending on the infringement and the size of the company, per the parliament.

The Council’s PR further stipulates that the higher sanction (7%) would apply for violations of the banned AI applications, while penalties of 1.5% would be levied for the supply of incorrect information. Additionally, it says sanctions of 3% could be imposed for violations of other AI Act obligations, but also notes that the provisional agreement allows for “more proportionate caps” on administrative fines for SMEs and start-ups in case of infringements. So there looks to be some scope for AI startups to face smaller penalties for infringements than AI giants would.
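As a rough illustration of how such fixed-amount-or-percentage ceilings typically cash out, here is a small sketch assuming the “whichever is higher” formulation used in other EU digital laws — an assumption on our part, since the final AI Act text had not been published at the time of writing.

```python
# Assumption: the fine ceiling is the higher of a fixed amount and a percentage
# of global annual turnover, as in other EU digital rules; the final AI Act
# text was not yet public when this was written.

def fine_ceiling(fixed_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Return the maximum possible fine under a 'whichever is higher' rule."""
    return max(fixed_eur, turnover_pct * global_turnover_eur)


if __name__ == "__main__":
    turnover = 50e9  # hypothetical company with €50B global annual turnover

    # Prohibited-practice tier cited above: €35M or 7% of turnover.
    print(f"Prohibited practices: €{fine_ceiling(35e6, 0.07, turnover):,.0f}")

    # Incorrect-information tier: €7.5M or 1.5% of turnover.
    print(f"Incorrect information: €{fine_ceiling(7.5e6, 0.015, turnover):,.0f}")
```

For a company of that (hypothetical) size the percentage dominates the fixed amount, which is why the turnover-based figure is the one large AI providers will care about, while smaller firms would hit the fixed caps — or the “more proportionate caps” the Council mentions.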

The deal agreed today also allows for a phased entry into force after the law is adopted — with six months allowed until rules on prohibited use cases kick in; 12 months for transparency and governance requirements; and 24 months for all other requirements. So the full force of the EU’s AI Act may not be felt until 2026.

Carme Artigas, Spain’s secretary of state for digital and AI issues, who led the Council’s negotiations on the file as the country has held the rotating Council presidency since the summer, hailed the agreement on the heavily contested file as “the biggest milestone in the history of digital information in Europe”; both for the bloc’s single digital market — but also, she suggested, “for the world”.

“We have achieved the first international regulation for artificial intelligence in the world,” she announced during a post-midnight press conference to confirm the political agreement, adding: “We feel very proud.”

The law will support European developers, startups and future scale-ups by giving them “legal certainty with technical certainty”, she predicted.

Speaking on behalf of the European Parliament, co-rapporteurs Dragoș Tudorache and Brando Benifei said their objective had been to deliver AI legislation that would ensure the ecosystem developed with a “human centric approach” which respects fundamental rights and European values.

Their assessment of the outcome was equally upbeat — citing the inclusion in the agreed text of a total ban on the use of AI for predictive policing and for biometric categorization as major wins.

“Finally we got in the right track, defending fundamental rights to the necessity that is there for our democracies to endure such incredible changes,” said Benifei, who just a few weeks ago was sounding doubtful a deal could be found. “We are the first ones in the world to have a horizontal legislation that has this direction on fundamental rights, that supports the development of AI in our continent, and that is up to date to the frontier of the artificial intelligence with the most powerful models under clear obligation. So I think we delivered.”

“We have always been questioned whether there is enough protection, whether there is enough stimulant for innovation in this text, and I can say, this balance is there,” added Tudorache. “We have safeguards, we have all the provisions that we need, the redress that we need in giving trust to our citizens in the interaction with AI, in the products in the services that they will interact with from now on.

“We now have to use this blueprint to seek now global convergence because this is a global challenge for everyone. And I think that with the work that we’ve done, as difficult as it was — and it was difficult, this was a marathon negotiation by all standards, looking at all precedents so far — but I think we delivered.”

The EU’s internal market commissioner, Thierry Breton, also chipped in with his two euro-cents — describing the agreement clinched a little before midnight Brussels time as “historic”. “It is a full package. It is a complete deal. And this is why we spent so much time,” he intoned. “This is balancing user safety, innovation for startups, while also respecting… our fundamental rights and our European values.”

Clear road ahead?

Despite the EU very visibly patting itself on the back tonight on securing a deal on ‘world-first’ AI rules, it’s not quite the end of the road for the bloc’s lawmaking process, as there are still some formal steps to go — not least, the final text will face votes in the parliament and the Council to adopt it. But given how much division and disagreement there has been over how (or even whether) to regulate AI, the biggest obstacles have been dismantled with this political deal, and the path to passing the EU AI Act in the coming months looks clear.

The Commission is certainly projecting confidence. Per Breton, work to implement the agreement starts immediately with the setting up of an AI Office within the EU’s executive — which will have the job of coordinating with the Member State oversight bodies that will need to enforce the rules on AI firms, and of overseeing the most advanced AI models, including by contributing to fostering standards and testing practices. A scientific panel of independent experts will be appointed to advise the AI Office about GPAI models. “We will welcome new colleagues… a lot of them,” said Breton. “We will work — starting tomorrow — to get ready.”

Opposition to the inclusion in the AI package of tiered rules for general purpose AIs has been led, in recent weeks, by France — and French AI startup Mistral, which had been lobbying for a total carve out from obligations for foundational models/GPAIs. In the event, the deal agreed by the Spanish presidency does contain some obligations for GPAIs and foundation models. So it’s not the total carve out Mistral and its lobbyists had been pushing for.

Responding to news of the political deal last night, France’s digital minister’s office put out a statement attributed to Jean-Noël Barrot which said (translated from French using AI): “We will be carefully analyzing the compromise reached today, and in the coming weeks we will ensure that the text preserves Europe’s ability to develop its own artificial intelligence technologies, and safeguards its strategic autonomy.”

It remains unclear how much of a carve out Mistral’s business might enjoy under the deal agreed. Asked about this during the press conference, Artigas suggested the French AI startup would — once commercialized — be likely to fit in the “low tier” for GPAIs, meaning it would have only limited transparency obligations, since she said it does not hit the high-capacity compute threshold triggering the systemic risk obligations (it is using what’s thought to be around 10^23 FLOPs of compute, not 10^25).

However, as Mistral is currently still in an R&D and pre-training phase for its models, she said it would be excluded from even the low tier compliance requirements.

This report was updated to include the response from the French digital ministry; link to the Council’s PR; and with additional details from the presser — including remarks about how the law might apply to Mistral. We also added details on civil society’s response.
