
Last updated: June 27, 2024

The EU AI Act: What you need to know

The European Union (EU) has finalized the Artificial Intelligence Act (AI Act), the first comprehensive legislative framework in history governing the sale and application of AI within the EU. Entering into force in 2024, with obligations phasing in over the following years, the act aims to ensure the secure and responsible use of AI systems, especially in domains where AI could threaten fundamental rights.

According to our 2024 AI & ML Report, 89% of engineers say the AI models they work with produce hallucinations on a daily or weekly basis, stressing the importance of ensuring AI-generated content is accurate and trustworthy.

The EU AI Act will significantly impact the AI sector and technology firms, making it crucial to grasp its fundamental principles and prepare a compliance strategy.

What is the EU AI Act?

The EU AI Act is a regulatory framework that aims to promote responsible AI development while mitigating associated risks. Its primary features include precisely defining AI systems, categorizing them by risk, and mandating transparency, traceability, and human oversight. It addresses potential hazards in critical areas such as healthcare, public institutions, education, and border monitoring.

The Act places strict demands on AI developers, requiring transparency and accountability throughout the AI lifecycle. It encourages the ethical and reliable application of AI to protect consumers and increase public confidence in these technologies.

Furthermore, the law requires tech companies developing AI technologies to disclose the data used in model training and to conduct risk assessments that meet EU AI Act standards. These measures emphasize human oversight, rather than reliance on automated processes alone, to mitigate hallucinations, profiling, bias, and other negative outcomes.


Importance of the EU AI Act 

The legislation will greatly impact companies in the AI industry, since they must abide by its requirements, which strongly emphasize risk mitigation, transparency, and traceability. While businesses are encouraged to comply voluntarily in 2024, non-compliance once enforcement begins may result in substantial fines.

The EU AI Act is significant because it can influence AI regulation both within Europe and internationally, as other countries may use it as a template for their own laws.

These requirements will deeply affect the AI sector and tech firms, compelling them to ensure compliance with the act’s standards. While the Act introduces regulatory burdens, it also offers opportunities for GenAI developers to demonstrate their commitment to responsible AI deployment. 

Risk levels in the EU AI Act 

The AI Act introduces regulations that vary based on the AI system’s risk level, mandating specific obligations for both providers and users. It’s crucial to assess the risk level of AI systems, as their impact can range from minimal to severe.

Unacceptable Risk

AI systems posing unacceptable risks will face prohibition due to their potential harm. This category includes:

  • Manipulative AI that targets cognitive behavior, especially in children and other vulnerable groups, such as voice-activated toys that promote hazardous actions.
  • Social scoring systems that evaluate individuals based on personal traits, behavior, or socio-economic status.
  • Biometric identification tools, including real-time and remote systems like facial recognition.

Exceptions exist for law enforcement in grave situations, with stringent conditions for “real-time” and “post-event” biometric identification, the latter subject to court approval for serious crime investigations.

High Risk

AI systems that could jeopardize safety or fundamental rights are classified as high risk. They fall into two groups:

  • AI in EU-regulated safety products, such as toys, vehicles, medical devices, and more.
  • AI in critical sectors requiring EU database registration, covering areas like critical infrastructure, education, employment, essential services, law enforcement, and legal assistance.

These high-risk AI systems undergo pre-market assessments and continuous lifecycle evaluations.

Generative AI

Generative AI, including models like ChatGPT, must ensure transparency by:

  • Disclosing that content was generated by AI.
  • Preventing the generation of illegal content.
  • Publishing summaries of the copyrighted data used in training.

Advanced general-purpose models, like GPT-4, require in-depth evaluations and incident reporting to the European Commission for systemic risks.

Limited Risk

AI systems with limited risk need to meet basic transparency standards that enable informed user decisions. This includes making users aware that they are interacting with AI, particularly with AI-generated or altered media content, so they can decide whether to continue using it.

In short, the EU AI Act imposes obligations on AI applications according to their level of risk, up to and including outright prohibition. Compliance with these tiers is crucial for companies to avoid the fines outlined in the legislation.
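
To make the tiered structure easier to reason about in code, here is a minimal sketch of how a team might encode the Act's risk tiers and the headline obligations attached to each. The tier names follow the categories described above, but the RiskTier enum, the obligation strings, and the mapping are illustrative assumptions, not an official schema from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the EU AI Act, from most to least restricted."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., social scoring)
    HIGH = "high"                  # safety components and critical sectors
    LIMITED = "limited"            # basic transparency duties
    MINIMAL = "minimal"            # no specific obligations

# Illustrative mapping of each tier to the headline obligations described above.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "pre-market conformity assessment",
        "registration in the EU database",
        "continuous lifecycle evaluation",
    ],
    RiskTier.LIMITED: ["disclose AI interaction and AI-generated media to users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the headline obligations for a system's assessed risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```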

The risk of non-compliance with the AI Act

Non-compliance with the EU AI Act poses significant risks, including hefty fines that can amount to millions of euros or a percentage of a company’s annual global turnover, reflecting the seriousness of breaches. Additionally, failure to adhere to the regulations can damage a company’s reputation, erode public trust, and result in restrictions or bans on operating within the European Union, one of the world’s largest and most lucrative markets.
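
For scale, the Act sets the ceiling for the most serious breaches (prohibited AI practices) at €35 million or 7% of worldwide annual turnover, whichever is higher; lesser violations carry lower caps. The quick sketch below shows the arithmetic, with the turnover figure purely illustrative.

```python
def max_fine_prohibited_practices(annual_turnover_eur: float) -> float:
    """Ceiling for prohibited-practice breaches under the EU AI Act:
    EUR 35M or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A company with EUR 2B worldwide annual turnover (illustrative figure):
print(max_fine_prohibited_practices(2_000_000_000))  # 140000000.0, i.e. EUR 140M
```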


Addressing risk with compliance

The EU AI Act prioritizes managing the risks linked with AI systems, such as “hallucinations,” where AI generates inaccurate outputs. It mandates transparency and traceability for data to minimize these risks, and high-risk AI systems face strict rules such as risk management and conformity assessments. By tackling the possibility of AI producing false outputs, the Act aims to boost trust in AI’s safety and reliability.
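
As one concrete interpretation of the traceability requirement, a team might keep an audit record for every model response so that an inaccurate output can be traced back to the prompt, model version, and source data that produced it. The sketch below is a hypothetical trace format, assuming a JSON-lines audit log; none of these fields are prescribed by the Act itself.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ResponseTrace:
    """Audit record for one model response; fields are illustrative, not mandated."""
    timestamp: str
    model_version: str
    prompt: str
    response: str
    retrieved_sources: list[str]  # documents the answer was grounded in, if any
    flagged_for_review: bool      # e.g., set by a hallucination detector

def record_trace(model_version: str, prompt: str, response: str,
                 sources: list[str], flagged: bool) -> str:
    """Serialize one trace as a JSON line for an append-only audit log."""
    trace = ResponseTrace(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        prompt=prompt,
        response=response,
        retrieved_sources=sources,
        flagged_for_review=flagged,
    )
    return json.dumps(asdict(trace))

print(record_trace("my-model-v3", "What does the EU AI Act regulate?",
                   "AI systems placed on or used in the EU market.",
                   ["eu_ai_act.pdf"], False))
```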

AI security

The EU AI Act overlaps heavily with AI security measures, improving the overall security posture of AI systems. The regulation mitigates potential breaches and harmful attacks by mandating that developers establish strict security processes and guarantee data privacy.

Additionally, the Act encourages proactive steps to reduce cybersecurity risks throughout the AI lifecycle by promoting secure-by-design principles. The act also highlights the importance of adhering to AI security protocols, demonstrating the EU’s dedication to encouraging AI’s ethical and secure application in various contexts.

The state of AI compliance and regulations

Effective compliance techniques and a thorough understanding of the actual consequences of the EU AI Act are vital for successfully navigating the regulatory environment it has built. 

Effective compliance techniques include monitoring compliance requirements and encouraging a culture of ethical artificial intelligence development, anchored by AI security measures and guardrails. Organizations can confidently manage the complexity of AI compliance by taking proactive steps and staying up-to-date on regulatory revisions.
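
Guardrails in this sense are programmatic checks that sit between a model and its users. The sketch below shows the general shape of a post-response guardrail; it is a simplified, hypothetical example rather than any particular vendor's implementation, and the banned-topics check stands in for whatever policies an organization actually enforces.

```python
from dataclasses import dataclass

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str

def check_response(response: str, banned_topics: list[str]) -> GuardrailResult:
    """Block a model response that mentions a banned topic; otherwise let it through.
    Production guardrails layer many such checks (PII, prompt injection, hallucination)."""
    lowered = response.lower()
    for topic in banned_topics:
        if topic.lower() in lowered:
            return GuardrailResult(False, f"response mentions banned topic: {topic}")
    return GuardrailResult(True, "passed all checks")

result = check_response("Our product can help you hide income from tax authorities.",
                        ["hide income"])
print(result.allowed, "-", result.reason)
```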

Looking at AI regulation around the world

The EU AI Act significantly advances AI compliance and regulations, with potential global implications. Unlike UK and US regulations, it establishes a thorough framework focusing on transparency, traceability, and risk management for AI systems. 

Here are some recent regulatory actions in the US, UK, China, South Korea, India, Canada, and Australia:

United States

In October 2023, President Biden took a significant step toward shaping the future of Artificial Intelligence in the US. His Executive Order “On Safe, Secure, and Trustworthy Artificial Intelligence” lays out a comprehensive framework aimed at maximizing the benefits of AI while minimizing its potential risks.

This order goes beyond just technological advancements. It emphasizes ethical considerations, requiring developers of powerful AI systems to be transparent about potential risks and share testing results with the government. This ensures proactive safety measures are in place.

United Kingdom

The UK has been active in AI regulation, focusing on promoting innovation while ensuring safety and accountability. The UK government has published various guidelines and strategies for AI governance, including the “Regulation of Artificial Intelligence” report by the House of Lords Select Committee on Artificial Intelligence.

China

China has been a significant player in AI regulation, with a focus on promoting innovation and economic growth while ensuring ethical and responsible AI development. The country has implemented various measures to govern AI, including the “New Generation Artificial Intelligence Development Plan” and the “Beijing AI Principles.”

South Korea

South Korea has been proactive in AI regulation, aiming to foster AI innovation while addressing ethical and social implications. The country has established the “AI Ethics and Safety Management System” to promote responsible AI development.

India

India has been working on AI regulation to balance innovation and ethical AI development. The country has released the “National Strategy for Artificial Intelligence” and the “National AI Portal” to coordinate AI-related initiatives and governance.

Canada

Canada has been at the forefront of AI regulation, emphasizing responsible and ethical AI development. The country has introduced the “Directive on Automated Decision-Making” and the “Pan-Canadian Artificial Intelligence Strategy” to govern the use of AI in various sectors.

Australia

Australia has been actively involved in AI regulation, focusing on promoting AI innovation while ensuring transparency and accountability. The country has released the “AI Action Plan” and the “AI Ethics Framework” to guide the responsible use of AI.

The EU AI Act marks the start of a more widespread movement toward AI governance and regulation. We can expect more industry-specific and general AI laws as AI spreads throughout more industries. These regulations will affect businesses and investors looking to harness AI’s enormous potential and cutting-edge technology.

AI regulations aim to control the potential risks of AI while encouraging its positive impact on society and the economy. These regulations also align with core values like human rights, sustainability, transparency, and responsible risk management, as outlined by the OECD and G20.

Jurisdictions adopt a risk-based strategy, tailoring regulations according to perceived risks associated with AI, such as discrimination, lack of transparency, privacy infringements, and security vulnerabilities. Compliance obligations correspond to the level of risk, ensuring proportionate measures.

AI regulations are integrated into broader digital policy frameworks encompassing cybersecurity, data privacy, and intellectual property protection, with the EU leading in comprehensive policy alignment.

Prepare for the EU AI Act with Aporia

Aporia is a comprehensive AI control platform that goes beyond basic security. With its AI Guardrails solution, organizations can proactively secure their AI applications in real time, aligning with the EU AI Act regulations. 

With AI control in place, you ensure reliable and trustworthy AI, maintain brand reputation, and boost user trust. Dive into Aporia Guardrails.

Reach out to get a demo today!
