AI Regulations in the European Union (EU)

The EU AI Act

The world's first comprehensive law on artificial intelligence (AI)

On 8 December 2023, the European institutions reached a provisional political agreement on the world's first comprehensive law on artificial intelligence: the new AI Act. The agreement followed trilogue negotiations between the EU Commission, Council, and Parliament.

In April 2021, the European Commission proposed the first EU regulatory framework for AI as part of its digital strategy. With the essential elements of the AI Act now agreed, the text still has to complete the remaining steps of the legislative process before it enters into force.

Foundation models trained with very large amounts of computational power will be considered systemically important. As a result, they will be subject to additional obligations, including transparency and cybersecurity measures. The Act also aims to prohibit certain uses of AI, such as emotion recognition systems in workplaces or schools. Furthermore, AI systems used in high-risk areas such as law enforcement and education will face strict requirements. Non-compliance with these rules could result in fines of up to 7% of global revenue.
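
As a rough illustration of how a provider might screen models against a compute-based systemic-importance test, the sketch below compares estimated training compute with a configurable cutoff. The 1e25 FLOP figure is the value widely reported around the provisional agreement, and the function and field names are hypothetical assumptions, not terms taken from the Act.

```python
from dataclasses import dataclass

# Illustrative cutoff only: 1e25 training FLOPs is the figure widely reported
# around the provisional agreement for systemically important foundation models.
SYSTEMIC_COMPUTE_THRESHOLD_FLOPS = 1e25

@dataclass
class FoundationModel:
    name: str
    training_flops: float  # estimated total training compute


def is_systemically_important(model: FoundationModel,
                              threshold: float = SYSTEMIC_COMPUTE_THRESHOLD_FLOPS) -> bool:
    """Flag models whose estimated training compute meets or exceeds the cutoff."""
    return model.training_flops >= threshold


if __name__ == "__main__":
    model = FoundationModel(name="example-llm", training_flops=3.2e25)
    # True -> additional transparency and cybersecurity obligations would apply
    print(is_systemically_important(model))
```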

The Commission put forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:

  • Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
  • Ensure legal certainty to facilitate investment and innovation in AI;
  • Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
  • Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

The forthcoming AI Act centres on promoting trustworthy AI. Its rules and obligations collectively establish a comprehensive framework for the responsible and ethical development, deployment, and use of AI technologies. Some of the key focus areas include:

Fundamental rights

The Act focuses heavily on prioritising and safeguarding the fundamental rights of people. The proposal acknowledges that AI, because of characteristics such as opacity and dependency on data, can affect fundamental rights outlined in the EU Charter. It aims to safeguard these rights through a defined risk-based approach that addresses potential risks.
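
To make the risk-based approach more concrete, here is a minimal sketch of how an organisation might triage its AI systems into risk tiers before mapping them to obligations. The tier names follow the commonly described prohibited/high/limited/minimal structure, but the triggers, keyword lists, and helper names are illustrative assumptions, not the Act's legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g. emotion recognition in workplaces or schools
    HIGH = "high"               # e.g. law enforcement, education
    LIMITED = "limited"         # transparency duties only
    MINIMAL = "minimal"         # no additional obligations

# Illustrative keyword lists; a real assessment would follow the Act's annexes.
PROHIBITED_USES = {"emotion recognition in workplace", "emotion recognition in school"}
HIGH_RISK_DOMAINS = {"law enforcement", "education", "employment", "critical infrastructure"}

def classify_use_case(intended_use: str, domain: str) -> RiskTier:
    """Very rough triage of an AI use case into a risk tier."""
    if intended_use in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if intended_use == "chatbot":
        return RiskTier.LIMITED  # users must be told they are interacting with AI
    return RiskTier.MINIMAL

print(classify_use_case("exam proctoring", "education"))  # RiskTier.HIGH
```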

Transparency

The AI Act mandates clear instructions for use and information about potential risks to fundamental rights and of discrimination. Transparency must be built into the design and development of high-risk AI systems, with obligations for users and providers set out in Chapter 3. Certain AI systems must disclose that content resembling authentic material was generated automatically by AI, with exceptions for law enforcement purposes or the exercise of freedom of expression. These transparency measures aim to help individuals understand and navigate AI systems and AI-generated content.
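
As a simple illustration of the disclosure idea, the sketch below attaches a machine-readable provenance label to generated content before it is published. The field names and label text are assumptions made for illustration; the Act does not prescribe a specific format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    body: str
    # Provenance metadata attached at generation time (illustrative fields).
    metadata: dict = field(default_factory=dict)

def label_as_ai_generated(content: GeneratedContent, model_name: str) -> GeneratedContent:
    """Attach an explicit AI-generation disclosure to a piece of content."""
    content.metadata.update({
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was generated by an AI system.",
    })
    return content

article = label_as_ai_generated(GeneratedContent(body="..."), model_name="example-model")
print(article.metadata["disclosure"])
```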

Monitoring and reporting obligations

The AI Act requires providers of high-risk AI systems to establish a post-market monitoring system that collects, documents, and analyses relevant data on system performance. Providers also have monitoring and reporting obligations, including investigating AI-related incidents and malfunctions, and authorities oversee compliance with these obligations for high-risk AI systems already on the market. Providers of high-risk AI systems placed on the Union market must promptly report any serious incident or malfunction that breaches Union-law obligations protecting fundamental rights to the market surveillance authorities of the Member States where the incident occurred.
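
A minimal sketch of what such a post-market monitoring loop could look like in practice is shown below: performance records are collected, and events that meet an internal severity criterion are queued for reporting. The severity rule, record fields, and reporting function are hypothetical and only illustrate the workflow, not the Act's procedural requirements.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MonitoringRecord:
    system_id: str
    timestamp: str
    metric: str                  # e.g. "false_positive_rate"
    value: float
    affected_rights: List[str]   # e.g. ["non-discrimination"]

def is_serious_incident(record: MonitoringRecord, threshold: float = 0.2) -> bool:
    """Illustrative internal rule: a large metric degradation touching fundamental rights."""
    return record.value >= threshold and bool(record.affected_rights)

def report_to_authority(record: MonitoringRecord, member_state: str) -> None:
    """Placeholder for notifying a market surveillance authority (format is hypothetical)."""
    print(f"[{member_state}] serious incident for {record.system_id}: "
          f"{record.metric}={record.value}, rights={record.affected_rights}")

records = [
    MonitoringRecord("credit-scoring-v2", "2024-01-15T10:00:00Z",
                     "false_positive_rate", 0.31, ["non-discrimination"]),
]
for r in records:
    if is_serious_incident(r):
        report_to_authority(r, member_state="DE")
```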

Documentation

The AI Act mandates detailed technical documentation for high-risk AI systems to ensure transparency and accountability. This documentation must cover system characteristics, capabilities, limitations, algorithms, data details, training, testing, and validation processes, as well as risk management documentation. The technical documentation must be drawn up before the system is placed on the market or put into use and must be kept up to date. Article 11 specifies the requirements for this technical documentation, emphasising its role in demonstrating compliance with the Act's standards.
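
To give a feel for how such documentation might be maintained alongside a model, the sketch below captures the categories listed above as a structured record that can be versioned with the system. The schema and example values are illustrative assumptions; Article 11 and the Act's annexes define the actual required content.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TechnicalDocumentation:
    """Illustrative schema mirroring the categories mentioned above (not the Act's own template)."""
    system_name: str
    intended_purpose: str
    capabilities: List[str] = field(default_factory=list)
    limitations: List[str] = field(default_factory=list)
    algorithms: List[str] = field(default_factory=list)
    training_data: str = ""
    testing_and_validation: str = ""
    risk_management_summary: str = ""
    version: str = "0.1"

doc = TechnicalDocumentation(
    system_name="resume-screening-assistant",
    intended_purpose="Rank job applications for human review",
    capabilities=["keyword extraction", "candidate scoring"],
    limitations=["not validated for non-English resumes"],
    algorithms=["gradient-boosted trees"],
    training_data="Anonymised historical applications, 2018-2023",
    testing_and_validation="Hold-out evaluation plus bias audit across applicant groups",
    risk_management_summary="Residual risks and mitigations tracked in the risk register",
)
print(doc.system_name, doc.version)
```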

Reference: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

