Top 10 operational impacts of the EU AI Act – Obligations on providers of high-risk AI systems
This article is part of a series on the operational impacts of the EU AI Act. The full series can be accessed here.
Published: July 2024
Download this series: The Top 10 operational impacts of the EU AI Act is available in PDF format.
Providers of high-risk AI systems will need to know Chapter III of the EU AI Act very well. Sections 2 and 3 of Chapter III set out the requirements a provider must meet when making a high-risk AI system available on the EU market.
One way to categorize these different requirements is to divide them broadly into organizational, documentation, system design and regulatory requirements, while recognizing that certain articles perform dual roles.
Product safety, financial institutions and AI literacy
A different approach is available for products with an AI system that are also subject to the EU harmonization legislation listed in Section A of Annex I. These products are regulated by other EU directives and regulations, e.g., on machinery, safety of toys, lifts, medical devices and in vitro diagnostic medical devices.
In these circumstances, according to Article 8(2), providers "have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under" the EU harmonization law. This approach is encouraged to ensure consistency, avoid duplication and minimize additional burdens. For instance, a provider of a high-risk AI system can rely on a single set of technical documentation, as permitted under Article 11(2).
Likewise, the AI Act allows for providers that are financial institutions subject to the EU financial services law to avoid duplication in certain instances. The obligation to implement certain aspects of a quality management system under the AI Act can be fulfilled by the provider complying with the rules on internal governance arrangements or processes under EU financial services law, according to Article 17(4).
Under Article 4 of the AI Act, a provider of any AI system, whether high risk or not, must ensure sufficient AI literacy within its organization. Staff and "other persons," presumably contractors and the like, who deal with the operation and use of the AI system are expected to have sufficient skills, knowledge and understanding to make an informed deployment of the AI system, as well as to be aware of the opportunities and risks of AI and the possible harm it can cause. Of course, this does not mean each individual needs to demonstrate the same level of AI literacy; the obligation takes into account the context in which the AI system will be used, who will use it and the affected persons.
Articles 8-22
This section provides an overview of the individual Articles 8-22 of Chapter III of the EU AI Act, which contain the core requirements on providers of high-risk AI systems.
Articles 8-22 of the EU AI Act
Article 8: Compliance with the requirements
Article 8 indicates high-risk AI systems must comply with the requirements under Section 2.
The heading of Section 2 is "Requirements for high-risk AI systems," but it is not immediately obvious who is required to comply with the section's requirements. That becomes clear in Section 3, Article 16(a), which notes providers of high-risk AI systems must "ensure that their high-risk AI systems are compliant with the requirements set out in Section 2." All actors in the AI value chain also have a vested interest in ensuring the high-risk AI system complies with the Section 2 requirements. For instance, an importer is required to ensure the provider has drawn up the technical documentation, required under Article 11, before the high-risk AI system is placed on the market, as set out in Article 23(1)(b).
Additionally, the deployer is dependent on the provider complying with a number of its obligations in order for the deployer to meet its own obligations. For instance, the deployer needs to understand the "instructions for use," which the provider is required to produce under Article 13, and to be able to effectively use the measures the provider designs for human oversight as set out in Article 14. Part 4 in this series will discuss importer and deployer obligations concerning high-risk AI systems.
What are the operational implications?
Providers placing high-risk AI systems on the EU market or putting them into service must comply with the Section 2 requirements. Article 8 acknowledges that compliance can take into account the intended purpose of the high-risk AI system as well as the generally acknowledged state of the art on AI and AI-related technologies.
Article 9: Risk-management systems
Article 9 requires providers to identify and manage risks associated with high-risk AI systems. By its very nature, a high-risk AI system is considered to carry greater risk whether from a product safety or a fundamental rights perspective. It is therefore unsurprising there is a requirement for the provider to establish, implement, document and maintain a risk-management system.
What are the operational implications?
A provider must:
- Identify the known or reasonably foreseeable risks associated with the AI system.
- Adopt appropriate and targeted risk management measures in view of the identified risks.
- Test AI systems to ensure the most appropriate risk-management measures are put in place.
The risk management system is not a one-off exercise that happens just before the AI system is launched on the EU market. It is a "continuous iterative process" that runs throughout the entire life cycle of the AI system. A provider should estimate and evaluate the risks that may emerge when the AI system is used for its intended purpose. It should also evaluate other risks possibly arising in light of data gathered from post-market monitoring, a requirement under Article 72, which is assisted by deployers providing data to the provider. A provider should ensure its contracts with deployers include a provision to require the deployer to provide information about the performance of the AI system to help the provider evaluate its compliance with the requirements in Articles 8–15.
Additionally, a provider should keep an eye out for and anticipate situations in which the AI system it has placed on the market is modified or white labeled by another actor, whether a distributor, importer, deployer or other third party, so that actor becomes a provider of a high-risk AI system. This scenario engages the obligations under Article 25, which sets out responsibilities along the AI value chain, including that the initial or original provider is required to closely cooperate with new providers of the AI system. The initial provider is expected to provide the necessary information and technical access, unless the initial provider originally specified that its non-high-risk AI system should not be changed into a high-risk AI system. What is stipulated in the contract the provider enters into, and the accompanying documents, will be key to setting limitations on how other actors can use the AI system.
Risk is defined in Article 3(2) of the AI Act as "the combination of the probability of an occurrence of harm and the severity of that harm," which encapsulates a relatively well-understood concept that is found in the International Organization for Standardization's ISO 14971, a risk-management standard for medical devices. The decision to take this approach to defining risk clearly links the concept of risk under the AI Act to the world of product safety and the potential for harm to individuals.
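To make the definition concrete, the sketch below scores a risk as probability multiplied by severity, in the spirit of the risk matrices used in product-safety standards such as ISO 14971. The scales, enum names and acceptability threshold are purely illustrative assumptions; the AI Act does not prescribe any particular scoring scheme.

```python
from dataclasses import dataclass
from enum import IntEnum


class Probability(IntEnum):
    """Illustrative probability-of-harm scale; not prescribed by the AI Act."""
    REMOTE = 1
    OCCASIONAL = 2
    PROBABLE = 3
    FREQUENT = 4


class Severity(IntEnum):
    """Illustrative severity-of-harm scale; not prescribed by the AI Act."""
    NEGLIGIBLE = 1
    MINOR = 2
    SERIOUS = 3
    CRITICAL = 4


@dataclass
class Risk:
    """One identified risk, scored as probability x severity, echoing the Article 3(2) definition."""
    description: str
    probability: Probability
    severity: Severity

    @property
    def score(self) -> int:
        return int(self.probability) * int(self.severity)

    def acceptable(self, threshold: int = 6) -> bool:
        # The acceptability threshold is an internal policy choice, not a figure from the act.
        return self.score <= threshold


if __name__ == "__main__":
    risk = Risk("Misclassification of a job applicant", Probability.OCCASIONAL, Severity.SERIOUS)
    print(risk.score, risk.acceptable())  # 6 True under the illustrative threshold
```

In practice, a provider would record many such entries in a living risk register and revisit them as post-market monitoring data arrives, consistent with the "continuous iterative process" described above.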
Once the risks are identified, the provider must select risk-management measures designed to address these risks. In selecting these measures, under Article 9(4) the provider must ensure they consider the effects and possible interaction resulting from the combined application of all the Section 2 requirements. With respect to its selection of risk-management measures, under Recital 65, the provider should also be able to "explain the choices made and, when relevant, involve experts and external stakeholders." The measures must be implemented so the AI system's residual risk is considered acceptable as set out under Article 9(5). Additionally, when identifying the most appropriate risk management measures, the provider must ensure:
- The elimination or reduction of risks as far as technically feasible through design and development of the AI system.
- Where appropriate, the implementation of adequate mitigation and control measures addressing risks that cannot be eliminated.
- The provision of information required under Article 13 on transparency and, where appropriate, training to deployers.
A provider must therefore be prepared to provide appropriate training on managing risks to the deployer using its high-risk AI. A provider must also test AI systems to identify the most appropriate and targeted risk-management measures and such testing can, in accordance with Article 60, include testing in real-world conditions. Providers also have an obligation to consider whether the intended purpose of the AI system means the system is likely to have an adverse impact on those under 18 years of age or other vulnerable groups, i.e., whether children or vulnerable groups are likely to be exposed to the AI system's operating environment and therefore could be affected.
Article 10: Data and data governance
Datasets are central to operating an AI system. Article 10 requires the provider to implement data governance and management practices to ensure those datasets are appropriate. Notably, it also specifically permits the use of special category personal data for bias-detection purposes.
Article 10 is primarily relevant to providers developing high-risk AI systems that make use of techniques involving the training of AI models with data. If the development of the high-risk AI system is not using techniques involving the training of AI models, then the requirements only apply to the testing datasets, according to Article 10(6). The requirements otherwise apply to training, validation and testing datasets.
What are the operational implications?
The management practices a provider must implement include, among others:
- Information about how the data was collected and the origin of the data. For personal data, this includes the original purpose of the data collection.
- Relevant data-preparation processing operations, e.g., annotation, labeling and cleaning.
- Examination of possible biases likely to affect the health and safety of individuals, have a negative impact on fundamental rights or lead to discrimination that is prohibited.
- Appropriate measures to detect, prevent and mitigate possible biases identified.
- Identification of relevant data gaps or shortcomings that prevent compliance with the AI Act and how those gaps or shortcomings can be addressed.
A provider must also ensure all three types of datasets — training, validation and testing — are relevant, sufficiently representative and, as far as possible, free of errors and complete given the intended purpose. The datasets must also have the appropriate statistical properties and must account for the characteristics or elements that are particular to the specific geographical, contextual, behavioral or functional settings in which the AI system is intended to be used by deployers. Note that, while a provider should be able to delineate the intended purpose of an AI system, it may not be able to anticipate all the various settings within which a deployer could use the AI system.
Article 10 specifically permits the processing of special category personal data as necessary to ensure bias detection and correction. However, in addition to complying with the EU General Data Protection Regulation, a provider must also meet additional conditions to use special category personal data for this purpose. These include the following, with an illustrative sketch of how such practices might be recorded after the list:
- Demonstrating the use of other data, including synthetic data or anonymized data, is not sufficient to detect and correct bias.
- Ensuring the special category data is subject to strict controls on access and only authorized people have access to the data.
- Ensuring the data is not processed by other parties — although, as an observation, a strict interpretation of this requirement could mean a provider could not use third-party processors for this part of its AI governance framework.
- Ensuring the special category data is deleted once the bias has been corrected or the data reaches the end of its retention period.
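As a hedged illustration of how the data-governance practices above might be documented for each dataset, the sketch below defines a simple record type. The field names and checks are hypothetical; Article 10 describes the practices a provider must adopt, not any particular schema or tooling.

```python
from dataclasses import dataclass, field


@dataclass
class DatasetGovernanceRecord:
    """Illustrative record of Article 10 practices for a single dataset; the schema is hypothetical."""
    dataset_name: str
    role: str                                     # "training", "validation" or "testing"
    collection_process: str                       # how and from where the data was collected
    original_purpose: str | None                  # for personal data, the original collection purpose
    preparation_steps: list[str] = field(default_factory=list)   # e.g. annotation, labeling, cleaning
    bias_examinations: list[str] = field(default_factory=list)   # possible biases examined
    bias_mitigations: list[str] = field(default_factory=list)    # measures to detect, prevent and mitigate
    data_gaps: list[str] = field(default_factory=list)           # shortcomings and how they are addressed
    contains_special_category_data: bool = False
    special_category_justification: str | None = None            # why synthetic or anonymized data was insufficient

    def open_issues(self) -> list[str]:
        """Flag missing documentation; these checks are illustrative, not an exhaustive legal test."""
        issues = []
        if not self.bias_examinations:
            issues.append("Document the examination of possible biases.")
        if self.contains_special_category_data and not self.special_category_justification:
            issues.append("Explain why synthetic or anonymized data was not sufficient.")
        return issues


if __name__ == "__main__":
    record = DatasetGovernanceRecord(
        dataset_name="loan-applications-2024",    # hypothetical dataset
        role="training",
        collection_process="Extracted from the provider's CRM under contract",
        original_purpose="Credit decisioning",
        contains_special_category_data=True,
    )
    print(record.open_issues())
```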
Article 11: Technical documentation
Meeting the obligations under Article 11 will likely require a fair amount of effort for providers. Article 11 requires a description, detailed in places, of technical aspects associated with the AI system, such as system architecture, training methodologies and cybersecurity measures. A provider must create this documentation before a high-risk AI system is placed on the market or put into service and keep it up to date.
What are the operational implications?
Since the main audiences for the technical documentation are the national competent authorities and notified bodies, it must be prepared in a clear and comprehensive form. The technical documentation must show how the AI system complies with the Section 2 requirements and must also, at a minimum, contain the information set out in Annex IV. These requirements are fairly extensive and include the following:
- A general description of the AI system.
- A detailed description of the AI system's elements and the process for its development.
- Information about the monitoring, functioning and control of the AI system.
- A description of the appropriateness of performance metrics for the specific AI system.
- A detailed description of the risk-management system.
- A description of relevant changes made by the provider to the system through its life cycle.
Small and medium-sized enterprises may provide the elements set out in Annex IV in a simplified manner, and the European Commission is required to establish a simplified technical documentation form for this purpose.
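The outline below is a minimal sketch of how a provider might track coverage of the Annex IV headings summarized above. The keys and sub-items paraphrase the bullet list in this article and are assumptions for illustration; the annex itself is considerably more detailed and should be consulted directly.

```python
# A minimal sketch of tracking coverage of a technical-documentation outline.
# The keys and sub-items paraphrase the bullet list above; the real Annex IV
# is considerably more detailed.
TECHNICAL_DOCUMENTATION_OUTLINE = {
    "general_description": ["intended purpose", "provider details", "system version"],
    "elements_and_development_process": ["system architecture", "training methodologies", "cybersecurity measures"],
    "monitoring_functioning_control": ["capabilities and limitations", "human oversight measures"],
    "performance_metrics": ["appropriateness of the chosen metrics"],
    "risk_management_system": ["description of the Article 9 risk-management system"],
    "lifecycle_changes": ["relevant changes made through the system's life cycle"],
}


def missing_sections(drafted: set[str]) -> set[str]:
    """Return outline sections for which nothing has been drafted yet."""
    return set(TECHNICAL_DOCUMENTATION_OUTLINE) - drafted


if __name__ == "__main__":
    print(missing_sections({"general_description", "performance_metrics"}))
```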
Article 12: Record keeping
Given the importance of being able to trace automated actions by a high-risk AI system, especially if the operation of the system caused harm, but also to ensure the AI system functions in accordance with its intended purpose, under Article 12, a provider must design the AI system so it automatically records or logs events during its lifetime. See Article 19 for the corresponding retention periods for these logs.
What are the operational implications?
The AI system should be designed to record relevant events to identify situations that may result in the AI system being a "product presenting a risk" or involving a "substantial modification." These two scenarios are essentially designed to flag harm to individuals and significant changes to the AI system.
In Article 3(19) of the Market Surveillance Regulation 2019/1020, a product presenting a risk is defined as a product that has the potential to negatively affect individuals, "the environment, public security and other public interests, protected by the applicable Union harmonisation legislation, to a degree which goes beyond that considered reasonable and acceptable in relation to its intended purpose or under the normal or reasonably foreseeable conditions of use of the product concerned."
A substantial modification is defined in Article 3(23) of the AI Act as a change to an AI system after it has been placed on the market or put into service that is not foreseen or planned in the initial conformity assessment carried out by the provider and that either affects the system's compliance with Section 2 of Chapter III or results in a change to the intended purpose of the AI system.
Additionally, the AI system should be designed to record events that facilitate the post-market monitoring system required under Article 72. The AI system should also be designed to record events that are relevant for monitoring the operation of high-risk AI systems referred to in Article 26(5), which refers to AI systems used by deployers.
If a provider has developed an AI system for remote biometric identification, "the logging capabilities shall provide, at a minimum:
a. recording of the period of each use of the system (start date and time and end date and time of each use);
b. the reference database against which input data has been checked by the system;
c. the input data for which the search led to a match;
d. the identification of the individual involved in the verification of the results, as referred to in Article 14(5)."
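Purely as an illustration, the sketch below models one log entry covering the four minimum items quoted above. The class and field names are hypothetical; Article 12 specifies what must be logged, not how the records are structured.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class BiometricUseLogEntry:
    """One illustrative log record for the minimum items quoted above; field names are hypothetical."""
    use_start: datetime                 # start date and time of the use
    use_end: datetime                   # end date and time of the use
    reference_database: str             # database against which the input data was checked
    matched_input_reference: str        # reference to the input data that led to a match
    verifying_persons: tuple[str, ...]  # persons involved in verifying the results (Article 14(5))


if __name__ == "__main__":
    entry = BiometricUseLogEntry(
        use_start=datetime(2025, 1, 6, 9, 0, tzinfo=timezone.utc),
        use_end=datetime(2025, 1, 6, 9, 15, tzinfo=timezone.utc),
        reference_database="watchlist-v3",        # hypothetical identifier
        matched_input_reference="frame-0042",     # hypothetical identifier
        verifying_persons=("operator-17",),
    )
    print(entry)
```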
Article 13: Transparency and provision of information to deployers
Article 13 is not concerned with transparency toward individuals affected by the high-risk AI system, which is addressed by the GDPR where personal data is involved. Instead, these requirements for providers are to ensure the AI system is developed so that it is sufficiently transparent for deployers. The operation of the AI system must enable deployers to interpret the system's output and use it appropriately. In particular, the design of the AI system must enable both the provider and deployer to comply with their obligations under Section 3.
What are the operational implications?
The provider must ensure its AI system is accompanied by concise, complete, correct and clear instructions for use that are relevant, accessible and comprehensible to deployers. The instructions for use must contain certain information, including:
- The identity and contact details of the provider and any authorized representative.
- The characteristics, capabilities and limitations of the performance of the high-risk AI system.
- Human oversight measures, referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the output of the high-risk AI systems by deployers.
- Details on the computational and hardware resources that deployers need to operate the AI system, its expected lifetime and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of the AI system, including software updates.
- Where relevant, a description of the mechanisms included within the high-risk AI system that allow deployers to properly collect, store and interpret the logs, as set out in Article 12.
Article 14: Human oversight
Unsurprisingly, the requirement for human oversight of high-risk AI systems is a core requirement. Under Article 14(1), providers must design and develop high-risk AI systems so they can be effectively overseen by an individual. The purpose of human oversight indicated by Article 14(2) is to prevent or minimize the risks to health, safety or fundamental rights that may emerge from the use of the AI system.
What are the operational implications?
The human oversight measures the provider implements in the AI system must be commensurate with the risks, autonomy and context of use for the AI system. Human oversight "shall be ensured through either one or both of the following types of measures:
a. measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; or
b. measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer."
The provider must deliver the AI system to the deployer so that those entrusted with human oversight can carry out certain activities, including:
- Understanding the relevant capacities and limitations of the high-risk AI system and duly monitoring its operation, including detecting and addressing anomalies, dysfunctions and unexpected performance.
- Maintaining an awareness of the possibility of automation bias.
- Correctly interpreting the high-risk AI system's output and considering the available interpretation tools and methods.
Remote biometric identification systems are subject to additional human oversight requirements given the significance of identifying an individual in this context.
Article 15: Accuracy, robustness and cybersecurity
Under Article 15, a provider must design and develop the high-risk AI system so it achieves an appropriate level of accuracy, robustness and cybersecurity, and it performs consistently in those respects throughout its life cycle.
What are the operational implications?
A provider must be able to measure levels of accuracy, although help may come from the European Commission, which shall encourage the development of benchmarks and measurement methodologies. Under Article 13, providers must ensure the instructions for use they provide to deployers include the levels of accuracy and relevant accuracy metrics of AI systems.
Under Article 15, providers must ensure AI systems are "as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems." Further, AI systems "that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures."
Article 15 calls out certain cyber threats to AI systems such as data poisoning, model poisoning and adversarial examples. High-risk AI systems must be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities.
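As a small illustration of the accuracy obligation, the sketch below computes a few common classification metrics on a held-out test set, of the kind a provider might declare in its instructions for use under Articles 13 and 15. The choice of metrics here is our own assumption; the act does not prescribe these particular measures, and Commission-encouraged benchmarks may eventually inform that choice.

```python
def accuracy_metrics(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    """Compute simple binary-classification metrics on a held-out test set."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    total = len(y_true)
    return {
        "accuracy": (tp + tn) / total if total else 0.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }


if __name__ == "__main__":
    # A tiny worked example; real testing would use representative test datasets per Article 10.
    print(accuracy_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
```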
Article 16: Other obligations on providers of high-risk AI systems
Article 16 signals the beginning of Chapter III, Section 3, "Obligations of providers and deployers of high-risk AI systems and other parties." Article 16 identifies all the requirements for providers under Section 2, as well as additional obligations on providers such as the need to affix a CE marking, which denotes European Conformity, to the AI system and registration of the AI system in the EU database.
Several of the obligations listed in Article 16 point to requirements set out elsewhere in the act. Under Article 16, the provider must:
- Ensure the provider's name, registered trade name or trademark, and address are indicated on the high-risk AI system or its packaging or documentation.
- Ensure the AI system undergoes the relevant conformity assessment as required by Article 43.
- Prepare an EU declaration of conformity as set out in Article 47.
- Affix the CE marking to the AI system and its packaging or documentation, according to Article 48.
- Register the AI system in the EU database when applicable, as set out in Article 49. This only applies to Annex III high-risk AI systems.
- Ensure the AI system complies with EU accessibility requirements.
What are the operational implications?
Article 16 lays out a useful checklist for providers of high-risk AI systems to understand their obligations, although it does not list out the Section 2 obligations in full and goes beyond Chapter III. A number of the obligations are proactive and must be achieved before placing the AI system on the market, whereas other obligations relate to the ongoing operation of the AI system or are reactive to events.
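Since several Article 16 obligations must be satisfied before the AI system is placed on the market, a provider might maintain an internal release gate along the lines of the sketch below. The flag names simply mirror the checklist above and are illustrative assumptions; ticking them is an internal control, not an official determination of conformity.

```python
# A minimal, illustrative pre-market "gate" mirroring the Article 16 checklist above.
PRE_MARKET_CHECKS = {
    "section_2_requirements_met": False,              # Articles 8-15
    "provider_details_on_system_or_docs": False,
    "quality_management_system_in_place": False,      # Article 17
    "technical_documentation_drawn_up": False,        # Article 11
    "conformity_assessment_completed": False,         # Article 43
    "eu_declaration_of_conformity_drawn_up": False,   # Article 47
    "ce_marking_affixed": False,                      # Article 48
    "registered_in_eu_database_if_annex_iii": False,  # Article 49
}


def ready_to_place_on_market(checks: dict[str, bool]) -> bool:
    """Return True only when every checklist item has been completed."""
    return all(checks.values())


if __name__ == "__main__":
    print(ready_to_place_on_market(PRE_MARKET_CHECKS))  # False until every item is done
```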
Article 17: Quality-management system
Article 17 is similar to the accountability requirement under the GDPR. The quality-management system, or QMS, must show how the provider complies with the AI Act through written policies, procedures and instructions.
What are the operational implications?
The QMS document must include certain baseline requirements proportionate to the size of the provider's organization. These include:
- A strategy for regulatory compliance.
- Techniques, procedures and systematic actions to be used for the design, design control and design verification of the AI system.
- Techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the AI system.
- Examination, test and validation procedures to be carried out before, during and after development of the AI system, and the frequency with which they have to be carried out.
- Technical specifications, including safeguards, to be applied.
- The creation, implementation and maintenance of a post-market monitoring system, i.e., to collect, document and analyze data provided by deployers or collected through other sources on the performance of the high-risk AI system throughout its lifetime.
Providers must document their QMS in a way that reflects the high-risk AI system and their organization.
Article 18: Documentation keeping
Article 18 requires the provider to retain certain key documents about the AI system for 10 years after it has been placed on the market or put into service, so they remain at the disposal of the national competent authorities.
What are the operational implications?
A provider needs to have a secure repository to keep the relevant documentation, including the technical documentation, the QMS documentation and the EU declaration of conformity, and be ready to provide these to the national competent authorities on request.
Article 19: Automatically generated logs
Article 19 requires the provider to retain logs automatically generated by the AI system for at least six months.
What are the operational implications?
The provider needs to implement a mechanism for storing logs, which can be easily accessed by date or time. It must also ensure it has considered the appropriate retention period for the logs generated, especially those that contain personal data.
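A minimal sketch of such a retention check follows, assuming six months is approximated as 183 days, which is our own simplification since the act does not define the month arithmetic. Longer retention may be appropriate, for example to support post-market monitoring, and logs containing personal data remain subject to the GDPR.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention check for automatically generated logs under Article 19.
# Approximating the six-month minimum as 183 days is our own simplification.
MINIMUM_RETENTION = timedelta(days=183)


def may_be_deleted(log_created_at: datetime, now: datetime | None = None,
                   retention_period: timedelta = MINIMUM_RETENTION) -> bool:
    """Return True once a log record is older than the configured retention period."""
    now = now or datetime.now(timezone.utc)
    return now - log_created_at >= retention_period


if __name__ == "__main__":
    created = datetime(2024, 8, 1, tzinfo=timezone.utc)
    print(may_be_deleted(created, now=datetime(2025, 3, 1, tzinfo=timezone.utc)))  # True
```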
Article 20: Corrective actions and duty of information
Under Article 20, if a provider considers that a high-risk AI system it has already placed on the market or put into service does not conform with the AI Act, it must immediately take the necessary corrective action to bring it into conformity or withdraw, disable or recall it, as appropriate.
What are the operational implications?
A provider needs to know how the AI system it is responsible for is being used. The provider should be able to obtain this awareness through implementing the post-market monitoring system mentioned under Articles 17 and 72.
If the provider learns the AI system is not in conformity, it must inform the distributors and, where applicable, the deployers, importers and authorized representative of the nonconforming AI system. Additionally, if the AI system presents a risk within the meaning of Article 79(1), the provider must immediately investigate the cause and, where applicable, inform the market surveillance authority and notified body.
Article 21: Cooperation with competent authorities
Under Article 21, providers must provide relevant information and documentation to a competent authority upon its reasoned request and give it access to the automatically generated logs referred to in Article 12.
What are the operational implications?
Providers should ensure their personnel can recognize a request from an authority by providing relevant training and awareness. Once a request is received from an authority, the provider must respond promptly.
Article 22: Authorized representatives of providers of high-risk AI systems
Article 22 requires providers not established in the EU to appoint an authorized representative in the EU under a mandate.
What are the operational implications?
The mandate between the provider and representative must empower the representative to carry out certain tasks, including verifying that the EU declaration of conformity and the technical documentation for the AI system have been drawn up, keeping documents and information at the disposal of the competent authorities for 10 years after the AI system is placed on the market, and cooperating with the authorities. A representative may also terminate the mandate if it has reason to believe the provider is acting contrary to its obligations under the AI Act.
Annex
Articles 8-15 comprise Section 2 of Chapter III and Articles 16-27 make up Section 3. If we treat Articles 8-22 as containing the core requirements on high-risk AI providers, it is possible to break down the requirements as set out in the table below. Please note our assessment of the obligations for the use of high-risk AI systems by other actors who are not providers will follow in Part 4 of this series.
| Article | Related provisions | Type of requirement |
| --- | --- | --- |
| Article 8: Compliance with the requirements | Section 2 | Organizational process |
| Article 9: Risk-management system | Articles 72, 13 and 60 | Documentation and system design |
| Article 10: Data and data governance | None | System design |
| Article 11: Technical documentation | Annex IV | Documentation |
| Article 12: Record keeping | Articles 79, 72 and 26 | Documentation and system design |
| Article 13: Transparency and provision of information to deployers | Articles 12, 14 and 15 | System design |
| Article 14: Human oversight | Annex III | System design |
| Article 15: Accuracy, robustness and cybersecurity | None | System design |
| Article 16: Obligations of providers of high-risk AI systems | All of Section 2 of Chapter III, Articles 17-20, Articles 43, 47-49 | Organizational, documentation, system design and regulatory |
| Article 17: Quality-management system | Articles 9, 72 and 73 | Documentation |
| Article 18: Documentation keeping | Articles 11, 17 and 47 | Documentation |
| Article 19: Automatically generated logs | Article 12 | Documentation |
| Article 20: Corrective actions and duty of information | Article 79 | Regulatory |
| Article 21: Cooperation with competent authorities | Article 12 | Regulatory |
Top 10 operational impacts of the EU AI Act
The full series in PDF format can be accessed here.
- Part 1: Subject matter, definitions, key actors and scope
- Part 2: Understanding and assessing risk
- Part 3: Obligations on providers of high-risk AI systems
- Part 4: Obligations on nonproviders of high-risk AI systems
- Part 5: Obligations for general-purpose AI models
- Part 6: Governance: EU and national stakeholders
- Part 7: AI assurance across the risk categories
- Part 8: Post-market monitoring, information sharing and enforcement
- Part 9: Regulatory implementation and application alongside EU digital strategy
- Part 10: Leveraging GDPR compliance