Decoding the EU's AI Act: Implications and Strategies for Businesses

Ketaki Joshi
May 7, 2024

On March 13, 2024, the European Parliament adopted the world's first comprehensive law on artificial intelligence: the new AI Act. This comes after a trilogue involving the EU Commission, Council, and Parliament.

In April 2021, the European Commission proposed the first EU regulatory framework for AI as part of its digital strategy. With the essential elements of the AI Act now settled, the remaining procedural steps toward enactment are still working through the legislative process. This phase offers a timely opportunity for businesses to prepare proactively and establish operational frameworks that ensure compliance with the AI Act as soon as it comes into force.

This blog outlines the key topics of the first comprehensive horizontal legal framework for AI and offers insights into the actions organizations should take to prepare.

Introduction

The Commission introduced a proposed regulatory framework for Artificial Intelligence, aiming to achieve the following objectives:

  • Ensure that AI systems placed and used on the Union market are safe and respect existing law on fundamental rights and Union values.
  • Provide legal clarity to promote investment and innovation in AI.
  • Strengthen governance and ensure the effective enforcement of existing laws governing fundamental rights and safety standards for AI systems.
  • Foster the creation of a unified market for lawful, safe, and dependable AI applications while mitigating market fragmentation.

The risk-based approach

The AI Act assigns obligations for AI according to its potential risks and impact on individuals and society. Depending on that risk, AI systems are categorized as low-risk or high-risk, and certain AI systems are prohibited outright.

After evaluating four policy options varying in regulatory scope, the Commission endorsed 'Option 3+' as the preferred approach: a horizontal EU legislative instrument following a proportionate risk-based approach, plus codes of conduct for non-high-risk AI systems. Under this option, the regulatory framework addresses only high-risk AI systems, while providers of non-high-risk AI systems may voluntarily adhere to a code of conduct. The requirements, which cover facets such as data, documentation and traceability, provision of information and transparency, human oversight, and robustness and accuracy, would be mandatory for high-risk AI systems.

An AI system is deemed high-risk when two conditions are met:

  1. The AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonization legislation listed in Annex II.
  2. The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment under that same Annex II legislation before being placed on the market or put into service.
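
To make the two-part test concrete, here is a minimal sketch. The product areas are simplified stand-ins for the Annex II list and every name below is illustrative; real classification is a legal determination, not a boolean check.

```python
# Hypothetical sketch of the two-part high-risk test. The product areas
# below are illustrative stand-ins, NOT the actual Annex II list.
ANNEX_II_PRODUCT_AREAS = {"machinery", "medical_devices", "toys"}  # sample only

def is_high_risk(product_area: str,
                 is_safety_component_or_product: bool,
                 requires_third_party_assessment: bool) -> bool:
    """Both conditions must hold for a system to be classified high-risk."""
    condition_1 = (is_safety_component_or_product
                   and product_area in ANNEX_II_PRODUCT_AREAS)
    condition_2 = requires_third_party_assessment
    return condition_1 and condition_2

print(is_high_risk("medical_devices", True, True))   # True  -> high-risk
print(is_high_risk("medical_devices", True, False))  # False -> both must hold
```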

The forthcoming AI Act focuses on promoting trustworthy AI. The rules and obligations collectively aim to establish a comprehensive framework for the responsible and ethical development, deployment, and use of AI technologies.

Fundamental rights

The Act places heavy emphasis on prioritizing and safeguarding people's fundamental rights. The proposal acknowledges that AI, due to characteristics like opacity and dependence on data, can impact the fundamental rights outlined in the EU Charter, and it aims to safeguard those rights through a defined risk-based approach to addressing potential risks.

Obligations concerning pre-testing, risk management, and human oversight aim to mitigate the chances of biased or erroneous AI-based decisions, especially in critical areas like education, employment, law enforcement, and the judiciary, thereby respecting fundamental rights. It lays out that transparency and traceability in AI systems, along with robust post-deployment controls, will enable effective redress for individuals affected by potential fundamental rights violations.

Transparency

The AI Act underscores transparency as crucial for high-risk AI systems to counter their complexity, ensuring users can understand and effectively use them. Accompanying documentation with clear instructions is mandated, including information about potential risks to fundamental rights and discrimination. Article 13 specifies the need to prioritize transparency in high-risk AI systems' design and development, aligning with obligations for users and providers outlined in Chapter 3.

Title IV of the Act delves into transparency obligations for specific AI systems, categorizing them by whether they interact with humans, detect emotions, or generate or manipulate content. Users interacting with these systems must be informed, and AI-generated content that resembles authentic material should be disclosed as machine-generated, with exceptions for law enforcement and freedom of expression purposes. These transparency measures aim to empower individuals to understand and navigate AI systems and content.
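
As a minimal illustration of the disclosure idea, the sketch below prepends a notice to machine-generated content; the wording, function name, and delivery mechanism are all assumptions, since the Act prescribes the obligation, not the implementation.

```python
# Illustrative only: the Act requires informing users that content is
# AI-generated; the notice text and delivery channel are up to the provider.
def with_disclosure(ai_generated_text: str) -> str:
    notice = "[Notice: this content was generated by an AI system.]"
    return f"{notice}\n{ai_generated_text}"

print(with_disclosure("Here is a summary of your claim..."))
```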

Monitoring and reporting obligations

Article 61 of the AI Act mandates providers of high-risk AI systems to establish a post-market monitoring system, scaled to the nature of the AI technologies and associated risks. This system must actively gather, document, and analyze relevant data on the performance of these systems throughout their lifespan. The aim is to enable providers to continually assess whether these AI systems comply with the requirements outlined in Title III, Chapter 2.
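
What "actively gather, document, and analyze" might look like in practice is, at minimum, structured logging of each prediction. The sketch below is a hypothetical example; the field names, storage format, and log path are assumptions, not requirements of the Act.

```python
import json
import time
from pathlib import Path

# Hypothetical post-market monitoring hook: log each prediction with enough
# context to analyze performance over the system's lifetime.
LOG_PATH = Path("postmarket_monitoring.jsonl")

def log_prediction(model_version: str, inputs: dict, output, latency_ms: float) -> None:
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "latency_ms": latency_ms,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

log_prediction("credit-scorer-1.3", {"income": 52000}, "approved", latency_ms=12.4)
```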

Under Title VIII, providers have monitoring and reporting obligations, including investigating AI-related incidents and malfunctions. Market surveillance authorities oversee compliance with these obligations for high-risk AI systems already in the market.

Additionally, Article 62 specifies that providers of high-risk AI systems placed on the Union market must promptly report any serious incidents or malfunctions breaching Union law obligations safeguarding fundamental rights to the market surveillance authorities in the Member States where the incident occurred. This notification must occur immediately upon establishing a link between the AI system and the incident or malfunction, and in any event no later than 15 days after the provider becomes aware of the issue.
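
A trivial but useful building block for this obligation is computing the outer reporting deadline. The sketch below assumes only the 15-day window described above; the function name and workflow around it are illustrative.

```python
from datetime import date, timedelta

# The 15-day window mirrors the reporting obligation described above;
# everything else here is illustrative.
REPORTING_WINDOW_DAYS = 15

def reporting_deadline(awareness_date: date) -> date:
    """Latest date by which the incident must be reported."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS)

print(reporting_deadline(date(2024, 5, 7)))  # 2024-05-22
```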

Technical robustness

The AI Act emphasizes that high-risk AI systems must prioritize technical robustness. These systems must withstand various limitations such as errors, faults, inconsistencies, and unexpected situations. Additionally, they should be resilient against malicious actions that could compromise security and lead to harmful or undesirable behavior. Failing to safeguard against these risks might result in safety issues or infringements on fundamental rights due to erroneous decisions or biased outputs from the AI system.
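
One concrete robustness measure is validating inputs and failing closed rather than propagating faults. This is a minimal sketch under assumed input bounds and a hypothetical fallback policy, not a prescribed control:

```python
# Minimal robustness sketch: reject malformed inputs and fail closed on
# unexpected faults. Bounds and fallback policy are hypothetical.
def robust_predict(model, features: list,
                   lower: float = -1e6, upper: float = 1e6) -> dict:
    if not features or any(
        not isinstance(x, (int, float)) or not lower <= x <= upper
        for x in features
    ):
        return {"status": "rejected", "reason": "invalid input", "output": None}
    try:
        return {"status": "ok", "reason": None, "output": model(features)}
    except Exception as exc:  # fail closed instead of returning garbage
        return {"status": "error", "reason": str(exc), "output": None}

print(robust_predict(lambda xs: sum(xs) / len(xs), [1.0, 2.0, 3.0]))
```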

Human oversight

Article 14 of the AI Act emphasizes that high-risk AI systems must be designed to allow effective human oversight while the AI system is in use. This oversight aims to prevent or minimize risks to health, safety, or fundamental rights that may arise when using the AI system as intended or under foreseeable misuse conditions. Human oversight can be implemented through various measures, such as building oversight capabilities into the AI system before it's introduced to the market or specifying appropriate oversight measures for users to implement.

The measures for human oversight must enable individuals overseeing the AI system to:

  1. Understand the system's capabilities and limitations to monitor its operation for anomalies or unexpected performance.
  2. Remain aware of the tendency to over-rely on the AI system's output (automation bias).
  3. Interpret the AI system's output correctly considering system characteristics and available interpretation tools.
  4. Have the authority to choose not to use or override the AI system's output in specific situations.
  5. Intervene or stop the AI system's operation through designated procedures.

For certain high-risk AI systems, additional measures are mandated, requiring confirmation by at least two natural persons before any action or decision is taken based on the system's identification output.
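
As an illustration of that last requirement, the sketch below gates action on confirmation by two distinct reviewers; the names and data structures are assumptions.

```python
# Hypothetical "two natural persons" gate: an identification output is
# actionable only after two distinct reviewers confirm it.
def confirm(output_id: str, reviewer: str, confirmations: dict) -> bool:
    """Record a confirmation; return True once two distinct reviewers agree."""
    confirmations.setdefault(output_id, set()).add(reviewer)
    return len(confirmations[output_id]) >= 2

confirmations: dict = {}
print(confirm("match-001", "alice", confirmations))  # False
print(confirm("match-001", "alice", confirmations))  # False (same person twice)
print(confirm("match-001", "bob", confirmations))    # True  -> actionable
```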

Mitigating Bias and Ensuring Safety

High-risk AI systems are allowed on the market or into service within the Union only if they meet specific mandatory requirements. These requirements aim to prevent these systems, or their outputs used within the Union, from posing unacceptable risks to significant public interests as recognized and protected by Union law.

In Article 15, the focus is on high-risk AI systems that continue to learn post-market release. These systems must be developed to address potential biases caused by feedback loops, ensuring appropriate measures mitigate biased outputs used as inputs for future operations. Additionally, obligations regarding testing, risk management, and human oversight aim to minimize errors or biases in AI-assisted decisions, thus upholding fundamental rights.
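
One simple way to watch for the feedback loops Article 15 describes is to compare per-group outcome rates across retraining rounds. The metric, threshold, and group labels below are purely illustrative:

```python
# Illustrative feedback-loop check: flag groups whose positive-outcome rate
# shifts by more than a threshold between retraining rounds.
def group_positive_rates(outcomes: list) -> dict:
    totals = {}
    for group, label in outcomes:  # outcomes: [(group, 0-or-1 label), ...]
        pos, n = totals.get(group, (0, 0))
        totals[group] = (pos + label, n + 1)
    return {g: pos / n for g, (pos, n) in totals.items()}

def flag_feedback_drift(before: list, after: list, threshold: float = 0.05) -> list:
    rates_before = group_positive_rates(before)
    rates_after = group_positive_rates(after)
    return [g for g in rates_before
            if abs(rates_after.get(g, 0.0) - rates_before[g]) > threshold]

before = [("A", 1), ("A", 0), ("B", 1), ("B", 1)]
after = [("A", 1), ("A", 0), ("B", 0), ("B", 0)]
print(flag_feedback_drift(before, after))  # ['B']
```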

Documentation

The AI Act mandates detailed technical documentation for high-risk AI systems to ensure transparency and accountability. This documentation must be drawn up before the system is placed on the market or put into service, kept up to date, and include essential information: system characteristics, capabilities, limitations, algorithms, data details, training, testing and validation processes, and risk management documentation. Article 11 specifies the requirements for this technical documentation, emphasizing its role in demonstrating compliance with the Act's standards, and it outlines the minimum elements the documentation must contain, as detailed in Annex IV.

Additionally, a single technical document combining all required information is necessary for high-risk AI systems associated with products falling under specific legal acts listed in Annex II. 
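
A structured record makes it easier to keep this documentation current. The skeleton below loosely mirrors the elements listed above; the field names are assumptions, and the authoritative minimum contents are those in Annex IV itself.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical skeleton for a technical-documentation record. Field names
# loosely mirror the elements above; Annex IV defines the real minimum.
@dataclass
class TechnicalDocumentation:
    system_name: str
    intended_purpose: str
    capabilities_and_limitations: str
    algorithms: list = field(default_factory=list)
    training_data_description: str = ""
    testing_and_validation: str = ""
    risk_management_summary: str = ""

doc = TechnicalDocumentation(
    system_name="resume-screener",
    intended_purpose="Rank job applications for human review",
    capabilities_and_limitations="Not validated for non-English resumes",
)
print(json.dumps(asdict(doc), indent=2))
```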

Next steps: How can organizations prepare for the AI Act?

Given the extensive scope and implications of the AI Act, staying ahead of the curve in aligning with the Act is crucial.

Organizations must focus on developing an internal framework for AI development and deployment that aligns with the fundamental rights protections outlined in the AI Act, emphasizing fairness, accountability, and transparency in AI usage. Identifying gaps between existing practices and the Act's requirements is a foundational step in understanding where improvement is needed; a minimal gap-analysis sketch follows the checklist below.

  • Transparency: Ensure high-risk AI systems prioritize transparency in design and development. Accompanying documentation should provide clear instructions and pertinent information about potential risks to fundamental rights and discrimination, so users can comprehend and navigate these systems effectively.
  • Monitoring: Institute robust post-market monitoring systems aligned with the AI Act's requirements. By actively collecting and analyzing pertinent data, these systems support continuous assessment and ongoing compliance with the regulatory standards the Act sets out.
  • Robust risk mitigation strategies: Technical robustness, as emphasized in the AI Act, demands resilience against errors, faults, and malicious actions. Establish robust risk mitigation strategies, focusing especially on bias mitigation, transparency enhancement, and fundamental rights protections.
  • Documentation review and update: Review and update technical documentation for high-risk AI systems, ensuring it covers all required elements specified in the AI Act, including system characteristics, algorithms, data usage, risk management, and compliance evidence. Meticulous documentation ensures readiness for compliance evaluations by national competent authorities or notified bodies.
  • Human oversight: Integrate effective human oversight measures, in line with Article 14, into AI systems. These measures should enable the individuals overseeing a system to understand its operation, identify biases, interpret outputs accurately, and intervene when necessary.
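
Pulling the checklist above into something actionable, here is a minimal, purely illustrative gap-analysis structure; the themes and statuses are examples, not an official compliance checklist.

```python
from dataclasses import dataclass

# Purely illustrative gap-analysis record: map each AI Act theme covered in
# this post to a current-state assessment.
@dataclass
class GapItem:
    theme: str          # e.g. "transparency", "post-market monitoring"
    requirement: str    # what the Act expects
    current_state: str  # what the organization does today
    compliant: bool

gaps = [
    GapItem("transparency", "Clear user-facing instructions and risk info",
            "Internal docs only", False),
    GapItem("human oversight", "Override and stop procedures",
            "Manual review in place", True),
]

for item in gaps:
    status = "OK " if item.compliant else "GAP"
    print(f"[{status}] {item.theme}: {item.requirement} (now: {item.current_state})")
```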

Ultimately, to navigate the intersection of AI regulation and the dynamic opportunities AI presents, organizations must prioritize AI governance, AI alignment, and ML observability. By doing so, businesses can foster trust, mitigate risks, and unlock the full potential of AI innovation responsibly.

The AI Act represents a significant shift in regulating AI, emphasizing trustworthiness and fundamental rights protection. Organizations must proactively align with its provisions to ensure compliance and responsible AI deployment.
