Europe’s AI Act falls far short on protecting fundamental rights, civil society groups warn

Civil society has been poring over the detail of the European Commission’s proposal for a risk-based framework for regulating applications of artificial intelligence, which the EU’s executive put forward back in April.

The verdict of over a hundred civil society organizations is that the draft legislation falls far short of protecting fundamental rights from AI-fuelled harms like scaled discrimination and blackbox bias — and they’ve published a call for major revisions.

“We specifically recognise that AI systems exacerbate structural imbalances of power, with harms often falling on the most marginalised in society. As such, this collective statement sets out the call of 115 civil society organisations towards an Artificial Intelligence Act that foregrounds fundamental rights,” they write, going on to identify nine “goals” (each with a variety of suggested revisions) in the full statement of recommendations.

The Commission, which drafted the legislation, billed the AI regulation as a framework for “trustworthy”, “human-centric” artificial intelligence. However, per the civil society groups’ analysis, it risks veering rather closer to an enabling framework for data-driven abuse — given the lack of essential checks and balances to actually prevent automated harms.

Today’s statement was drafted by European Digital Rights (EDRi), Access Now, Panoptykon Foundation, epicenter.works, AlgorithmWatch, European Disability Forum (EDF), Bits of Freedom, Fair Trials, PICUM, and ANEC — and has been signed by a full 115 not-for-profits from across Europe and beyond.

The advocacy groups are hoping their recommendations will be picked up by the European Parliament and Council as the co-legislators continue debating — and amending — the Artificial Intelligence Act (AIA) proposal ahead of any final text being adopted and applied across the EU.

Key suggestions from the civil society organizations include the need for the regulation to be amended to have a flexible, future-proofed approach to assessing AI-fuelled risks — meaning it would allow for updates to the list of use-cases that are considered unacceptable (and therefore prohibited) and those that the regulation merely limits, as well as the ability to expand the (currently fixed) list of so-called “high risk” uses.

The Commission’s approach to categorizing AI risks is too “rigid” and poorly designed (the groups’ statement literally calls it “dysfunctional”) to keep pace with fast-developing, fast-iterating AI technologies and changing use cases for data-driven technologies, in the NGOs’ view.

“This approach of ex ante designating AI systems to different risk categories does not consider that the level of risk also depends on the context in which a system is deployed and cannot be fully determined in advance,” they write. “Further, whilst the AIA includes a mechanism by which the list of ‘high-risk’ AI systems can be updated, it provides no scope for updating ‘unacceptable’ (Art. 5) and limited risk (Art. 52) lists.

“In addition, although Annex III can be updated to add new systems to the list of high-risk AI systems, systems can only be added within the scope of the existing eight area headings. Those headings cannot currently be modified within the framework of the AIA. These rigid aspects of the framework undermine the lasting relevance of the AIA, and in particular its capacity to respond to future developments and emerging risks for fundamental rights.”

They have also called out the Commission for a lack of ambition in framing prohibited use-cases of AI — urging a “full ban” on all social scoring systems; on all remote biometric identification in publicly accessible spaces (not just narrow limits on how law enforcement can use the tech); on all emotion recognition systems; on all discriminatory biometric categorisation; on all AI physiognomy; on all systems used to predict future criminal activity; and on all systems to profile and risk-assess in a migration context — arguing for prohibitions “on all AI systems posing an unacceptable risk to fundamental rights”.

On this the groups’ recommendations echo earlier calls for the regulation to go further and fully prohibit remote biometric surveillance — including from the EU’s data protection supervisor.

The civil society groups also want regulatory obligations to apply to users of high risk AI systems, not just providers (developers) — calling for a mandatory obligation on users to conduct and publish a fundamental rights impact assessment to ensure accountability around risks cannot be circumvented by the regulation’s predominant focus on providers.

After all, an AI technology that’s developed for one ostensible purpose could be applied for a different use-case that raises distinct rights risks.

Hence they want explicit obligations on users of “high risk” AIs to publish impact assessments — which they say should cover potential impacts on people, fundamental rights, the environment and the broader public interest.

“While some of the risk posed by the systems listed in Annex III comes from how they are designed, significant risks stem from how they are used. This means that providers cannot comprehensively assess the full potential impact of a high-risk AI system during the conformity assessment, and therefore that users must have obligations to uphold fundamental rights as well,” they urge.

They also argue for transparency requirements to be extended to users of high-risk systems — suggesting they should have to register the specific use of an AI system in the public database the regulation proposes to establish for providers of such systems.

“The EU database for stand-alone high-risk AI systems (Art. 60) provides a promising opportunity for increasing the transparency of AI systems vis-à-vis impacted individuals and civil society, and could greatly facilitate public interest research. However, the database currently only contains information on high-risk systems registered by providers, without information on the context of use,” they write, warning: “This loophole undermines the purpose of the database, as it will prevent the public from finding out where, by whom and for what purpose(s) high-risk AI systems are actually used.”

Another recommendation addresses a key civil society criticism of the proposed framework — that it does not offer individuals rights and avenues for redress when they are negatively impacted by AI.

This marks a striking departure from existing EU data protection law — which confers a suite of rights on people attached to their personal data and — at least on paper — allows them to seek redress for breaches, as well as for third parties to seek redress on individuals’ behalf. (Moreover, the General Data Protection Regulation includes provisions related to automated processing of personal data; with Article 22 giving people subject to decisions with a legal or similar effect which are based solely on automation a right to information about the processing; and/or to request a human review or challenge the decision.)

The lack of “meaningful rights and redress” for people impacted by AI systems represents a gaping hole in the framework’s ability to guard against high risk automation scaling harms, the groups argue.

“The AIA currently does not confer individual rights to people impacted by AI systems, nor does it contain any provision for individual or collective redress, or a mechanism by which people or civil society can participate in the investigatory process of high-risk AI systems. As such, the AIA does not fully address the myriad harms that arise from the opacity, complexity, scale and power imbalance in which AI systems are deployed,” they warn.

They are recommending the legislation be amended to include two individual rights as a basis for judicial remedies — namely:

  • (a) The right not to be subject to AI systems that pose an unacceptable risk or do not comply with the Act; and
  • (b) The right to be provided with a clear and intelligible explanation, in a manner that is accessible for persons with disabilities, for decisions taken with the assistance of systems within the scope of the AIA;

They also suggest a right to an “effective remedy” for those whose rights are infringed “as a result of the putting into service of an AI system”. And, as you might expect, the civil society organizations want a mechanism for public interest groups such as themselves to be able to lodge a complaint with national supervisory authorities for a breach or in relation to AI systems that undermine fundamental rights or the public interest — which they specify should trigger an investigation. (GDPR complaints simply being ignored by oversight bodies is a major problem with effective enforcement of that regime.)

Other recommendations in the groups’ statement include the need for accessibility to be considered throughout the AI system’s lifecycle, and they call out the lack of accessibility requirements in the regulation — warning that this risks leading to the development and use of AI with “further barriers for persons with disabilities”; they also want explicit limits to ensure that the harmonized product safety standards which the regulation proposes to delegate to private standards bodies only cover “genuinely technical” aspects of high-risk AI systems (so that political and fundamental rights decisions “remain firmly within the democratic scrutiny of EU legislators”, as they put it); and they want requirements on AI system users and providers to apply not only when the outputs are applied within the EU but also elsewhere — “to avoid risk of discrimination, surveillance, and abuse through technologies developed in the EU”.

Sustainability and environmental protection have also been overlooked, per the groups’ assessment.

On that they’re calling for “horizontal, public-facing transparency requirements on the resource consumption and greenhouse gas emission impacts of AI systems” — regardless of risk level; and covering AI system design, data management and training, application, and underlying infrastructures (hardware, data centres, etc.).

The European Commission frequently justifies its aim of encouraging the uptake of AI by touting automation as a key technology for enabling the bloc’s sought-for transition to a “climate-neutral” continent by 2050 — however AI’s own energy and resource consumption is a much overlooked component of these so-called ‘smart’ systems. Without robust environmental auditing requirements also applying to AI, it’s simply PR to claim that AI will provide the answer to climate change.

The Commission has been contacted for a response to the civil society recommendations.

Last month, MEPs in the European Parliament voted to back a total ban on remote biometric surveillance technologies such as facial recognition, a ban on the use of private facial recognition databases and a ban on predictive policing based on behavioural data.

They also voted for a ban on social scoring systems which seek to rate the trustworthiness of citizens based on their behaviour or personality, and for a ban on AI assisting judicial decisions — another highly controversial area where automation is already being applied.

So MEPs are likely to take careful note of the civil society recommendations as they work on amendments to the AI Act.

In parallel the Council is in the process of determining its negotiating mandate on the regulation — and current proposals are pushing for a ban on social scoring by private companies but seeking carve outs for R&D and national security uses of AI.

Discussions between the Commission, Parliament and Council will determine the final shape of the regulation, although the parliament must also approve the final text of the regulation in a plenary vote — so MEPs’ views will play a key role.
