What UK firms need to know about the EU’s Artificial Intelligence Act

This landmark legislation, recently approved by the European Parliament, is set to come into force gradually over the next two years. How will it affect businesses beyond the bloc?

EU flags flying outside of the European Parliament in Brussels

Ever since ChatGPT shook up the business world in Q4 2022, firms have been racing to use AI, but regulators are catching up fast in their bid to ensure that any such application is safe and trustworthy.

In March, for instance, the European Parliament signed off the EU Artificial Intelligence Act. Pending final checks, the legislation should be adopted before the parliamentary election in June, with its provisions taking effect in stages over 24 months. It amounts to the world’s first major set of statutory standards governing the use of AI.

“With the growing presence of AI in all aspects of daily life, legal frameworks have become urgently needed to regulate its uses and protect data,” says Neil Thacker, EMEA CISO at cybersecurity firm Netskope. 

He adds that one of the main objectives of the new legislation is to “strike the right balance of enabling innovation while respecting ethical principles”. As part of this effort, the act splits AI systems into different risk categories governed by requirements of varying stringency. 

It will also apply to any system that touches, or otherwise interacts with, consumers in the EU. That means it could have a broad extraterritorial impact. A British company using AI to analyse data that’s then sent to a European client, for instance, would be covered by the legislation. 

“The act is wide-ranging, trying to provide guidance and protection across the multitude of areas that AI will affect in the coming years,” Thacker says.

How onerous are the act’s requirements?

The main concern for UK business leaders is how onerous the new law is likely to be for their firms. For many, the EU’s previous big statutory intervention – the General Data Protection Regulation – has cast a long shadow since taking effect in 2018. Remembering the paperwork this required and the many changes they had to make to ensure compliance, they’re understandably worried that the new legislation could impose similar bureaucratic burdens, which might prove costly.

Fear not, says Michael Veale, associate professor at University College London’s faculty of laws, who has been poring over its small print. 

Many of its provisions are “quite straightforward and imaginable”, he says. These include “making sure that your system is secure and not biased in ways that are undesirable, and that any human overseeing it can do so appropriately and robustly”. 

In theory, such requirements shouldn’t be too taxing, according to Veale. 

“They echo a variety of the very basic demands on AI systems in recent years,” he explains. “While it may be difficult to interpret them in every single context, they aren’t particularly onerous or revolutionary.”

A focus on high-risk systems

One of the most fundamental questions for any UK firm to ask itself is whether it’s selling high-risk systems into the EU, says Veale, who notes that the vast majority won’t be. The few that are “should be looking at the standards and making sure they’re following them anyway”. 
The EU won’t be assessing companies and certifying them as compliant, so expect compliance to be largely self-policed through third-party, industry-led standards bodies, with regulators stepping in only when needed, he adds. 

There are certain aspects of the act that “average non-specialist businesses should know”, so that they can take steps to ensure compliance, according to Thacker. 

“Initially, they should heed its references to general-purpose AI systems,” he advises. “The new law includes transparency requirements such as technical documentation and compliance with EU copyright laws. Where such information is not available, businesses will be required to control the internal use of such systems.” 

Thacker points out that the legislation includes explicit requirements for detailed summaries about the content used in training any general-purpose AI systems.

When you need to worry

Companies specialising in areas that the legislation deems “high risk” will need to be particularly attentive to its terms. That’s not only because of the more stringent requirements that will apply to them. It’s also because they’ll have relatively little time to ensure compliance. While most organisations whose activities are covered by the act will have two years to implement any required changes, the deadline is tighter for makers of high-risk systems.

Most applications identified as high risk by the act are those that public sector organisations would use for purposes such as education, the management of critical infrastructure or the allocation of emergency services.

Any UK firm selling AI products for such purposes would need to register these in a centralised database and undergo the same certification process that applies to any EU counterpart. 

Beyond that, all businesses would be wise to audit their systems and use of AI more regularly and thoroughly. This should help them to prepare for any further statutory changes in this extremely fast-moving field. 

The EU’s act is the first legislative effort of note to lasso a constantly evolving technology whose most prominent current form, the advanced chatbot, is barely 18 months old. The situation could easily change radically long before this law’s final provisions are due to take effect. It’s therefore vital for businesses to keep abreast of AI developments as a matter of course, Thacker stresses. 

“Knowing and documenting the usage of both machine learning and AI systems in an organisation”, he says, “is a simple way to understand and anticipate vulnerabilities to business-critical data and ensure the responsible application of AI.”