
Top 10 operational impacts of the EU AI Act – AI assurance across the risk categories

This article is part of a series on the operational impacts of the EU AI Act. The full series can be accessed here.


Published: September 2024


Download this series: The Top 10 operational impacts of the EU AI Act is available in PDF format.


Previous articles in this series have reflected on the scope and oversight requirements of the EU AI Act. Given the vast uses of AI, as well as the significant potential harm these systems carry, the assurance requirements embedded into the act provide important checks and balances.

While the AI Act does not define AI assurance, the term is increasingly used across the AI ecosystem, inspired by assurance mechanisms in other industries, such as accounting and product safety. The U.K. government defines assurance as "the process of measuring, evaluating and communicating something about a system or process, documentation, a product or an organisation. In the case of AI, assurance measures, evaluates and communicates the trustworthiness of AI systems." Similar mechanisms in the AI Act span a spectrum of oversight functions, including standards, conformity assessments and audits.

In relation to the AI Act, AI assurance mechanisms establish comprehensive processes and activities to measure and ensure a given AI system or general-purpose AI model adheres to specific obligations and requirements. Assurance is distinct from compliance. While compliance involves meeting set standards and regulations, assurance encompasses a broader and deeper evaluation to build confidence in an AI system's reliability and safety.


Broad vs. narrow perspectives of assurance

Assurance can be understood in broad and narrow senses.

Broad perspective

From a broader standpoint, assurance covers all actions taken to ensure and measure the compliance of an AI system with relevant rules, in this case, the AI Act. This includes ongoing monitoring, internal audits and assessments to provide a holistic view of the system's adherence to regulations.

Narrow perspective

In a narrower sense, assurance can be distinguished from official conformity assessment procedures. It refers specifically to private or internal evaluations that do not carry the formal recognition of official assessments but still contribute to the overall trustworthiness and reliability of the AI system.

These perspectives are often used interchangeably, and it is important to keep this in mind when discussing AI assurance mechanisms.

Utilizing AI assurance tools can undoubtedly aid in measuring and mitigating risks, thereby facilitating preparation for compliance with the AI Act and the conformity assessments thereof. However, certifications from private AI assurance tools or companies do not indicate that an AI system or model officially meets the AI Act's requirements and do not carry the legal weight of an official conformity assessment. To fully grasp the role and implications of AI assurance under the AI Act, it is essential to examine the interaction between AI assurance and AI Act compliance.


Interaction of AI assurance with AI Act compliance

The EU AI Act, as a pioneering legislative framework, introduces a risk-based regulatory approach to AI governance. It provides a set of different requirements and obligations for AI systems and general-purpose AI models depending on their risk classifications. The obligations also vary for each type of operator, with the provider having the most stringent obligations.

The AI Act does not provide a general, one-stop compliance route or compliance assessment procedure. High-risk AI systems, the most strictly regulated subject under the act, have a dedicated procedure for assessing whether a given system meets the requirements applicable to them: the conformity assessment. All high-risk AI systems must undergo a conformity assessment before being placed on the EU market. The procedure is, however, formalistic in scope. It is provided only for AI systems, not for general-purpose AI models, and it covers only the requirements under Chapter III, Section 2 of the AI Act, not the full set of obligations, which vary by type of operator.

Depending on factors such as the system's intended purpose and the use of official EU standards, the assessment may be internal or require third-party involvement. There are two main conformity assessment procedures under the act.

The first is internal self-assessment, outlined in Annex VI, under which providers conduct an internal control procedure, verifying that their quality management system, or QMS, and technical documentation comply with the AI Act's requirements.

The second is external assessment by a notified body, outlined in Annex VII. This procedure involves a detailed third-party review of the QMS and technical documentation to ensure compliance with the act. It can only be conducted by a notified body authorized by a notifying authority, and it concludes with a certification issued by the notified body conducting the assessment. This certification marks the official recognition that a given AI system meets the requirements provided for high-risk AI systems. Though not officially recognized, AI assurance actions play a crucial role in supporting and preparing for this compliance.
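To make the internal route more concrete, the following is a minimal Python sketch of how a provider might track an Annex VI internal control run. The record structure, field names and the example system are illustrative assumptions; the act prescribes the substance of the checks, not any particular format.

```python
from dataclasses import dataclass, field

@dataclass
class InternalControlRecord:
    """Illustrative record of an Annex VI internal control run.

    The structure and field names are assumptions for illustration;
    the AI Act prescribes the substance of the checks, not this format.
    """
    system_name: str
    qms_verified: bool = False        # quality management system checked
    tech_docs_verified: bool = False  # technical documentation checked
    findings: list[str] = field(default_factory=list)

    def ready_for_declaration(self) -> bool:
        # Both verifications must pass, with no open findings, before the
        # provider can draw up an EU declaration of conformity.
        return self.qms_verified and self.tech_docs_verified and not self.findings

record = InternalControlRecord("hypothetical-hiring-screening-system")
record.qms_verified = True
record.findings.append("technical documentation lacks data governance section")
print(record.ready_for_declaration())  # False until the finding is resolved
```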


What type of conformity assessment must be conducted?

Conformity assessments are governed by Article 43 of the AI Act. The conformity assessment procedure that must be used depends on the type of high-risk AI system, as the sketch after the list below illustrates.

AI systems used in biometrics. The provider of the system can choose either the internal or external conformity assessment procedure. However, if the harmonized standards or common specifications are not available or if they are available but only partially complied with, then the provider must follow the external conformity assessment procedure per Article 43(1).

AI systems used in other sectors provided under Annex III. The provider, in principle, shall follow the conformity assessment procedure based on internal control per Article 43(2).

High-risk AI systems based on Union harmonization legislation. The provider shall follow the relevant conformity assessment procedure as required under those legal acts.
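This routing logic can be summarized in a short Python sketch. It is a simplified reading of Article 43, not a compliance tool; the type labels and the single boolean flag are illustrative assumptions.

```python
from enum import Enum, auto

class Route(Enum):
    INTERNAL_CONTROL = auto()    # Annex VI self-assessment
    NOTIFIED_BODY = auto()       # Annex VII third-party assessment
    SECTORAL_PROCEDURE = auto()  # route set by Union harmonization acts

def conformity_route(system_type: str, standards_fully_applied: bool = False) -> Route:
    """Simplified Article 43 routing for high-risk AI systems.

    system_type: "biometrics", "annex_iii_other" or "harmonization_legislation".
    standards_fully_applied: whether harmonized standards or common
        specifications exist and are fully complied with.
    """
    if system_type == "biometrics":
        # The provider may choose either route, but must involve a notified
        # body if standards are absent or only partially complied with.
        return Route.INTERNAL_CONTROL if standards_fully_applied else Route.NOTIFIED_BODY
    if system_type == "annex_iii_other":
        # Article 43(2): internal control in principle.
        return Route.INTERNAL_CONTROL
    return Route.SECTORAL_PROCEDURE

print(conformity_route("biometrics"))  # Route.NOTIFIED_BODY
```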


What are harmonized standards?

Harmonized standards are technical standards, and conformity with them triggers a presumption of conformity with the requirements under Section 2 of the act, to the extent a given standard covers a given requirement. These standards are among the most important practical tools provided under the AI Act but are not yet available.

Only standards issued by the European standardization authorities, namely the European Committee for Standardization, or CEN, the European Committee for Electrotechnical Standardization, or CENELEC, and the European Telecommunications Standards Institute, can be harmonized with the AI Act, provided they are issued following the required procedure. In May 2023, the European Commission issued an implementing decision on standardization requesting CEN and CENELEC to draft new European standards or European standardization deliverables supporting EU AI policy, as listed in Annex I of that decision, by 30 April 2025.

It is important to note that, despite their practical value and wide industry adoption, International Organization for Standardization and International Electrotechnical Commission standards are not harmonized standards under the AI Act. In other words, compliance with such standards will not automatically denote compliance with the AI Act.


What are common specifications?

Common specifications are tools to aid compliance with the requirements under Section 2 when no harmonized standard exists despite the Commission's request and none will be published in the Official Journal of the European Union within a reasonable time. In other words, common specifications are temporary substitutes for harmonized standards, issued by the European Commission rather than by the EU standardization bodies.
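The presumption-of-conformity mechanic shared by harmonized standards and common specifications can be sketched as follows. Because no harmonized standards under the AI Act have been published yet, the standard name and its coverage below are entirely hypothetical, and the requirement labels merely paraphrase Section 2's headings.

```python
# Hypothetical coverage map: complying with an instrument is presumed to
# satisfy the Section 2 requirements that instrument covers, and no more.

SECTION_2_REQUIREMENTS = {
    "risk_management", "data_governance", "technical_documentation",
    "record_keeping", "transparency", "human_oversight",
    "accuracy_robustness_cybersecurity",
}

def presumed_covered(instruments: dict[str, set[str]]) -> set[str]:
    """Requirements presumed met via harmonized standards or common
    specifications, each to the extent of its own coverage."""
    covered: set[str] = set()
    for coverage in instruments.values():
        covered |= coverage & SECTION_2_REQUIREMENTS
    return covered

complied = {"hEN-XXXX (hypothetical)": {"risk_management", "data_governance"}}
remaining = sorted(SECTION_2_REQUIREMENTS - presumed_covered(complied))
print(remaining)  # requirements still to be demonstrated directly
```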


Where is AI assurance in the picture?

While private AI assurance tools do not provide the official conformity stamp, they are valuable in establishing practical trust, to a degree that depends on the credibility and expertise of the firm conducting the assurance. These tools can evaluate an AI system's compliance readiness, identify potential risks and suggest improvements to enhance compliance with the AI Act. They are neither official conformity assessment procedures nor harmonized standards or common specifications under the act. However, they may significantly facilitate preparation for compliance with the AI Act's requirements and obligations.

There are two main functions a private AI assurance tool can play in the compliance journey of a given AI system with the AI Act; a minimal sketch of the first follows this list.

Facilitating and supporting compliance preparation. Assurance tools can assess an AI system's readiness against the applicable requirements, surface gaps and risks, and suggest improvements ahead of the formal conformity assessment.

Serving as alternative means of compliance. For general-purpose AI models, discussed below, assurance mechanisms may qualify as alternative adequate means of demonstrating compliance where no approved code of practice or harmonized standard is followed.
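As a minimal sketch of the first function, a pre-assessment gap analysis might look like the following in Python. The check names are entirely hypothetical, and a clean report creates no legal presumption under the act.

```python
# Entirely hypothetical readiness checks mapped to AI Act themes; a clean
# report is internal preparation, not an official conformity finding.
READINESS_CHECKS = {
    "risk management system documented": True,
    "training data provenance recorded": False,
    "human oversight measures defined": True,
    "post-market monitoring plan in place": False,
}

def gap_report(checks: dict[str, bool]) -> list[str]:
    """Return the open items to close before the formal conformity assessment."""
    return [name for name, passed in checks.items() if not passed]

for gap in gap_report(READINESS_CHECKS):
    print("open item:", gap)
```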


Where are general-purpose AI models located in the AI assurance scheme?

General-purpose AI models are governed by Articles 51-55. Unlike high-risk AI systems, general-purpose AI models are not subject to mandatory conformity assessment procedures. Depending on their functions, private AI assurance tools or mechanisms may nonetheless help the provider of a given general-purpose AI model achieve compliance with its obligations.

The AI Office is empowered to encourage and facilitate the drawing up of codes of practice at the EU level per Article 56(1). These codes of practice can be relied upon to show compliance with the respective set of obligations until a harmonized standard is published. Providers of a general-purpose AI model that do not adhere to an approved code of practice or comply with a published harmonized standard must demonstrate alternative adequate means of compliance for assessment. Here, private assurance tools may qualify as alternative adequate means of compliance with the act.
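Read together, these provisions suggest a simple preference order for demonstrating general-purpose AI model compliance, sketched below. The function and its inputs are illustrative assumptions rather than terms from the act.

```python
def gpai_compliance_basis(adheres_to_code_of_practice: bool,
                          harmonized_standard_published: bool) -> str:
    """Simplified preference order for demonstrating GPAI compliance.

    Codes of practice serve until a harmonized standard is published;
    failing both, the provider must show alternative adequate means,
    which is where private assurance mechanisms may come in.
    """
    if harmonized_standard_published:
        return "comply with the published harmonized standard"
    if adheres_to_code_of_practice:
        return "rely on the approved code of practice"
    return "demonstrate alternative adequate means of compliance for assessment"

print(gpai_compliance_basis(adheres_to_code_of_practice=False,
                            harmonized_standard_published=False))
```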


Conclusion

AI assurance is an essential part of AI governance and must be understood as a concept distinct from compliance for the purposes of the AI Act. While compliance focuses on meeting established standards through formal conformity assessments, assurance offers a broader, ongoing evaluation to build trust and ensure long-term reliability. Together, they contribute to a safe and trustworthy AI market, aligning with the goals of the AI Act. AI assurance tools and mechanisms may facilitate or support compliance with the AI Act, or function as alternative means of compliance in the case of general-purpose AI models. However, because these tools and mechanisms are not provided by notified bodies, they cannot confer official recognition of compliance with the AI Act and do not trigger its legal presumptions and implications.


Additional resources


Top 10 operational impacts of the EU AI Act

The full series in PDF format can be accessed here.


