Top 10 operational impacts of the EU AI Act – Obligations on nonproviders of high-risk AI systems
This article is part of a series on the operational impacts of the EU AI Act. The full series can be accessed here.
Published: August 2024
Download this series: The Top 10 operational impacts of the EU AI Act is available in PDF format.
What should you do if you are using or handling a high-risk AI system but are not a provider? The AI Act imposes a comprehensive set of obligations on deployers, importers, distributors and authorized representatives to ensure the safe and compliant use of high-risk AI systems within the EU.
These stakeholders must work together to uphold the principles of transparency, safety and accountability to foster trust and innovation in the AI ecosystem. Through diligent adherence to these regulations, the potential risks associated with AI can be mitigated, ensuring the benefits of AI are realized while protecting the fundamental rights and safety of individuals. The obligations imposed on deployers, importers, distributors and authorized representatives appear in Chapter III, Section 3.
Obligations of deployers
Deployers have a general obligation to take appropriate technical and organizational measures to ensure they are using high-risk AI systems in accordance with the instructions for use accompanying such systems. Deployers may be subject to further obligations under EU or national law in addition to those in the AI Act.
AI literacy
Although Article 4 does not appear in Chapter III, Section 3, it appears at the end of Chapter I and imposes an obligation on deployers to ensure their staff and others dealing with the operation and use of AI systems have sufficient AI literacy. Deployers should consider the technical knowledge, experience, education and training; the context in which the AI systems will be used; and the people using them.
Under Article 3, the objective of AI literacy is to enable deployers and other parties to make informed decisions about AI systems and their deployment, and to gain awareness of the opportunities, risks and possible harms AI can cause. These notions vary with the context in which the AI is used, which suggests deployers must adapt their training so staff acquire the necessary skills and knowledge about the AI being deployed.
For deployers, AI literacy may include understanding the correct application of technical elements during the AI system's development phase, the measures to be applied during its use or the suitable ways to interpret the AI system's output.
The EU AI Board is required to support the European Commission in promoting AI literacy tools, as well as public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems.
Due diligence
Deployers are generally not subject to due diligence requirements, unlike importers and distributors of high-risk AI systems.
However, deployers that are public authorities and EU institutions, bodies, offices or agencies must first verify the high-risk AI system they intend to use has been properly registered in the EU database in accordance with Article 49. They must also refrain from using an AI system that has not been registered.
Compliance and monitoring
Deployers are required to monitor the operation of the high-risk AI system based on the instructions for use they receive from the provider. When relevant, they must give feedback to the provider, which enables the provider to comply with its post-market monitoring obligations.
If the deployer has reason to believe that use of the high-risk AI system in accordance with the instructions for use may result in the system presenting a risk to the health, safety or fundamental rights of individuals within the meaning of Article 79(1), the deployer must inform the provider or distributor and the relevant market surveillance authority without undue delay and suspend further use of that system.
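For illustration only, this escalation duty can be read as a short workflow: flag the suspected risk, notify the provider or distributor and the market surveillance authority without undue delay, and suspend use. The Python sketch below is a hypothetical internal aid; every name in it is invented and nothing in it is prescribed by the AI Act.

```python
# Illustrative sketch only: a hypothetical escalation workflow for a deployer
# that suspects a high-risk AI system presents a risk within the meaning of
# Article 79(1). All names are invented; this is not prescribed by the AI Act.
from dataclasses import dataclass, field


@dataclass
class RiskEscalation:
    system_id: str
    reason: str                                   # why the deployer suspects a risk
    notified: list[str] = field(default_factory=list)
    use_suspended: bool = False


def escalate(system_id: str, reason: str) -> RiskEscalation:
    """Inform the provider or distributor and the authority, then suspend use."""
    case = RiskEscalation(system_id=system_id, reason=reason)
    case.notified.append("provider")                        # or distributor, as applicable
    case.notified.append("market surveillance authority")   # without undue delay
    case.use_suspended = True                               # stop further use of the system
    return case


if __name__ == "__main__":
    case = escalate("credit-scoring-v2", "unexpected disparate error rates")
    print(case.notified, case.use_suspended)
```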
Financial institutions that are deployers are deemed to have fulfilled their monitoring obligation if they have complied with the rules under EU law on internal governance arrangements, processes and mechanisms that apply to the financial sector.
When Article 35 of the EU General Data Protection Regulation requires deployers to carry out data protection impact assessments, they may rely on the instructions for use given by the provider for assistance.
Human oversight
Deployers must assign human oversight to natural persons who have the necessary competence, training, authority and support to oversee the use of high-risk AI systems. Deployers remain free to organize their own resources and activities to implement the human oversight measures indicated by the providers.
To the extent the deployer exercises control over the input data, it must ensure the input data is relevant and sufficiently representative in view of the high-risk AI system's intended purpose.
Incident reporting
If a deployer has identified a serious incident, it must immediately inform the provider, then the importer or distributor and the relevant market surveillance authorities.
Transparency and information
Before "putting to service" or using a high-risk AI system in the workplace, deployers who are employers must inform affected workers and their representatives that they will be subject to the use of the high-risk system, in accordance with the rules and procedures under EU or national law.
In addition to the transparency requirements that apply to certain AI systems, such as chatbots, generative AI, deepfakes or emotion-recognition systems, deployers of the high-risk AI systems referred to in Annex III, which concern fundamental rights rather than product safety, must inform natural persons that they are subject to the use of the high-risk AI system when it makes or assists in making decisions related to them.
Recordkeeping
Deployers must keep the logs automatically generated by a high-risk AI system, to the extent those logs are under their control, for a period appropriate to the system's intended purpose and of at least six months, unless provided otherwise in applicable EU or national law, particularly the GDPR.
Financial institutions that are deployers must maintain the logs as part of the documentation they are required to keep in accordance with EU laws applicable to the financial sector.
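As a rough sketch of this retention rule, assuming a deployer tracks a retention period per system, the floor is six months unless applicable EU or national law sets a different period. The helper below is purely illustrative; its names are not drawn from the Act.

```python
# Illustrative sketch: computing a retention period for the automatically
# generated logs of a high-risk AI system. Six months is the floor unless
# applicable EU or national law provides otherwise. Names are hypothetical.
from datetime import timedelta
from typing import Optional

SIX_MONTHS = timedelta(days=183)  # approximate statutory minimum


def log_retention(appropriate_for_purpose: timedelta,
                  legally_mandated: Optional[timedelta] = None) -> timedelta:
    """Return the retention period to apply to automatically generated logs."""
    if legally_mandated is not None:
        return legally_mandated                      # EU or national law prevails
    return max(appropriate_for_purpose, SIX_MONTHS)  # otherwise, at least six months


print(log_retention(timedelta(days=90)))    # purpose period too short -> ~6 months
print(log_retention(timedelta(days=365)))   # purpose-appropriate period applies
```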
Cooperation with the authorities
Deployers must cooperate with the relevant competent authorities in any action those authorities take regarding the high-risk AI system to implement the AI Act. Additional obligations apply to deployers of post-remote biometric identification systems, including submitting annual reports to the relevant market surveillance and data protection authorities.
Fundamental rights impact assessment
Article 27 of the AI Act requires deployers of high-risk AI systems to conduct fundamental rights impact assessments, or FRIAs, to evaluate and mitigate potential adverse impacts on fundamental rights. However, unlike data protection impact assessments, which are imposed on any controller carrying out a high-risk processing activity, FRIAs are limited to certain high-risk AI systems and certain deployers. Significantly, providers are not required to carry out FRIAs because it is deployers that decide to use AI systems for specific, concrete purposes.
FRIAs only apply to high-risk AI systems that fall under Article 6(2) of the AI Act. Deployers of high-risk AI systems that are intended to be used as safety components of products, or that are themselves products covered by EU harmonization legislation pursuant to Article 6(1) of the AI Act, are not required to carry out FRIAs. High-risk AI systems that fall under Article 6(1) are already required to undergo third-party conformity assessments before they are placed on the market or put into service.
Furthermore, deployers of high-risk AI systems that are intended to be used as safety components in the management and operation of critical digital infrastructure or road traffic or in the supply of water, gas, heat or electricity are also excluded from the obligation to carry out FRIAs.
For all other high-risk AI systems that fall under Annex III, the obligation to carry out FRIAs is limited to the following categories of deployers:
- Public law entities.
- Private operators that provide public services, such as education, health care, social services, housing and the administration of justice.
- Other private operators, such as banks and insurance entities, that deploy AI systems to evaluate creditworthiness, establish a credit score, and assess risk and pricing for life and health insurance.
This means, in practice, many deployers using high-risk AI systems will not be required to perform FRIAs, though they may still need to carry out DPIAs.
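To make this scoping concrete, the sketch below encodes one simplified reading of when Article 27 applies: only Annex III high-risk systems, excluding Article 6(1) product-safety systems and the critical-infrastructure systems mentioned above, and only for the deployer categories listed. It is an illustration under these assumptions, not legal advice, and all of its names are invented.

```python
# Illustrative, simplified sketch of the Article 27 scoping logic described
# above. Category labels and field names are assumptions made for this example.
from dataclasses import dataclass

DEPLOYERS_IN_SCOPE = {
    "public_law_entity",
    "private_operator_public_services",        # education, health care, housing, etc.
    "creditworthiness_or_insurance_operator",  # credit scoring, life/health insurance pricing
}


@dataclass
class Deployment:
    annex_iii_high_risk: bool         # Article 6(2) / Annex III system
    article_6_1_product_safety: bool  # safety component covered by harmonization law
    critical_infrastructure: bool     # digital infrastructure, road traffic, utilities
    deployer_category: str


def fria_required(d: Deployment) -> bool:
    """Return True when a fundamental rights impact assessment is required."""
    if not d.annex_iii_high_risk or d.article_6_1_product_safety:
        return False
    if d.critical_infrastructure:
        return False
    return d.deployer_category in DEPLOYERS_IN_SCOPE


print(fria_required(Deployment(True, False, False, "public_law_entity")))          # True
print(fria_required(Deployment(True, False, False, "retailer_internal_hr_tool")))  # False
```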
FRIAs must contain the following information:
- A description of the deployer's processes that will use the high-risk AI system in line with its intended purpose.
- A description of the period in which and the frequency with which each high-risk AI system is intended to be used.
- The categories of natural persons and groups likely to be affected by the system's use in the specific context.
- The specific risks of harm likely to impact the categories of persons or groups of persons identified in the preceding bullet, taking into account the information given by the provider pursuant to Article 13, i.e., the instructions for use.
- A description of the implementation of human oversight measures, according to the instructions for use.
- The measures to be taken when those risks materialize, including the arrangements for internal governance and complaint mechanisms.
Deployers are only required to carry out FRIAs for the first use of a high-risk AI system. They may rely on previously conducted FRIAs or on existing impact assessments carried out by the provider. They may also rely on a DPIA if one has been carried out that covers the FRIA's required elements. Deployers have an obligation to keep their FRIAs up to date. Once the assessment has been performed, deployers must notify the market surveillance authority of the results. This will involve submitting a completed template questionnaire developed by the AI Office.
By performing this assessment, deployers ensure their use of high-risk AI systems is compliant with legal standards and aligned with the broader ethical obligations to protect and promote fundamental human rights, thereby fostering greater public trust and acceptance of AI technologies.
Obligations of importers
Importers that place high-risk AI systems on the EU market are subject to rigorous checks to ensure compliance with the AI Act. See the comparison of importer and distributor obligations below.
Due diligence
Before importers place high-risk AI systems on the market, they must verify the AI system's conformity with the AI Act. In particular, they must verify the following (a simple checklist sketch follows the list):
- The provider has conducted the proper conformity assessment in accordance with Article 43.
- The provider has drawn up the technical documentation of the AI system.
- The AI system bears the required CE marking and is accompanied by the EU declaration of conformity and instructions for use.
- The provider has appointed an authorized representative in accordance with Article 22(1), when required.
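For illustration, these verification steps can be captured as a simple pre-market checklist. The sketch below is a hypothetical internal compliance aid under that assumption; none of its field or function names come from the Act.

```python
# Illustrative sketch: an importer's pre-market verification checklist for a
# high-risk AI system, mirroring the checks listed above. Names are hypothetical.
from dataclasses import dataclass


@dataclass
class ImporterChecklist:
    conformity_assessment_done: bool       # Article 43 assessment by the provider
    technical_documentation_drawn_up: bool
    ce_marking_present: bool
    eu_declaration_of_conformity: bool
    instructions_for_use_included: bool
    authorized_rep_appointed: bool         # Article 22(1), when required
    authorized_rep_required: bool = True


def may_place_on_market(c: ImporterChecklist) -> bool:
    """True only when every applicable check passes."""
    rep_ok = c.authorized_rep_appointed or not c.authorized_rep_required
    return all([
        c.conformity_assessment_done,
        c.technical_documentation_drawn_up,
        c.ce_marking_present,
        c.eu_declaration_of_conformity,
        c.instructions_for_use_included,
        rep_ok,
    ])


print(may_place_on_market(ImporterChecklist(True, True, True, True, True, True)))
```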
Compliance
Importers must indicate their name, registered trade name or registered trademark, and the address where they can be contacted on the high-risk AI system and on its packaging or accompanying documentation, when applicable. While a high-risk AI system is under their responsibility, they must ensure the storage or transport conditions do not jeopardize the system's compliance with the requirements under Chapter III, Section 2 of the AI Act.
Importers must not place a high-risk AI system on the market if there is reason to believe it does not conform with the AI Act, is falsified or is accompanied by falsified documentation, until the system has been brought into conformity. Presumably, the provider must bring the high-risk AI system into conformity, given that the importer is required in such cases to inform the provider about the risks of the AI system.
Documentation and recordkeeping
Importers must keep a copy of the EU declaration of conformity, the certificate issued by the notified body and the instructions for use for 10 years after the high-risk AI system is placed on the market or put into service. Importers must also ensure technical documentation is available to regulatory authorities upon request.
Reporting
Importers must inform the provider of the high-risk AI system, the authorized representative and market surveillance authorities when the system presents a risk to the health, safety or fundamental rights of persons within the meaning of Article 79(1).
Cooperation with authorities
Importers are obligated to provide the relevant competent authorities with all necessary information and documentation related to the AI system, including technical information, upon reasoned request. They must also cooperate with relevant competent authorities on any action they take regarding a high-risk AI system the importers placed on the market to reduce and mitigate the risks it poses.
Obligations of distributors
Distributors that make high-risk AI systems available on the EU market must ensure these systems comply with the AI Act's requirements. A number of these obligations are similar, but not identical, to those of importers. See the comparison of importer and distributor obligations below.
Due diligence
Before making a high-risk AI system available on the market, distributors are required to verify that the system conforms with the AI Act. Distributors must verify that AI systems bear the CE marking and are accompanied by a copy of the EU declaration of conformity and the instructions for use. They also need to ensure providers and importers have complied with their respective obligations, including conformity assessments.
Compliance
While a high-risk AI system is under their responsibility, distributors must ensure the storage or transport conditions do not jeopardize the system's compliance with the requirements under Chapter III, Section 2 of the AI Act.
Furthermore, when a distributor believes, based on the information in its possession, that a high-risk AI system does not comply with those requirements, it must not make the system available on the market until it has been brought into conformity. Presumably, the provider of the AI system will bring it into conformity.
If the system has already been placed on the market, the distributor must take any corrective action needed to bring it into compliance, withdraw it, recall it or ensure the provider, importer or relevant operator takes corrective actions.
Reporting
When a distributor believes a high-risk AI system that does not comply with Chapter III, Section 2 of the AI Act presents a risk to the health, safety or fundamental rights of persons within the meaning of Article 79(1), it must inform the provider or the importer of the system before making it available on the market. If the AI system has already been placed on the market, the distributor must immediately inform the provider or importer of the AI system and the relevant competent authorities of the member states, providing the details of the noncompliance and any corrective actions taken.
Cooperation with the authorities
Distributors must provide the relevant competent authorities with all necessary information and documentation regarding the distributor's actions to demonstrate the system's conformity with Chapter III, Section 2 of the AI Act, upon reasoned request. Distributors must cooperate with relevant competent authorities in actions regarding the high-risk AI systems they have made available on the market, in particular to reduce and mitigate their risks.
Comparison of importer and distributor obligations
Due diligence
Importers:
- Verify that the provider has conducted the proper conformity assessment in accordance with Article 43.
- Verify that the provider has drawn up the technical documentation of the AI system.
- Verify that the provider has appointed an authorized representative in accordance with Article 22(1), when required.
- Verify that the AI system bears the required CE marking and is accompanied by the EU declaration of conformity and instructions for use.
Distributors:
- Verify that providers and importers have fulfilled their obligations, including conformity assessments.
- Verify that AI systems bear the CE marking and are accompanied by a copy of the EU declaration of conformity and instructions for use.
Compliance
Importers:
- Indicate their name, registered trade name or registered trademark, and address on the AI system's packaging or its accompanying documentation.
- Ensure the storage or transport conditions do not jeopardize the AI system's compliance with Chapter III, Section 2 of the AI Act.
- Do not place any AI system that is noncompliant, falsified or accompanied by falsified documentation on the market until it has been brought into conformity.
Distributors:
- Ensure the storage or transport conditions do not jeopardize the AI system's compliance with Chapter III, Section 2 of the AI Act.
- Do not make any noncompliant AI system available on the market until it has been brought into conformity.
- When a noncompliant AI system has been made available on the market, take the corrective actions necessary to bring it into conformity, withdraw it, recall it or ensure the provider, importer or any relevant operator takes those corrective actions, as appropriate.
Recordkeeping
Importers:
- Keep a copy of the EU declaration of conformity, the certificate issued by the notified body and the instructions for use for 10 years after the high-risk AI system is placed on the market or put into service.
- Ensure technical documentation is available to regulatory authorities upon request.
Distributors:
- N/A
Reporting
Importers:
- Inform the provider of an AI system, the authorized representative and the market surveillance authorities, but not the deployer, of any risks, within the meaning of Article 79(1), posed by such AI system.
Distributors:
- Inform the provider or the importer of an AI system of any risks, within the meaning of Article 79(1), posed by such AI system.
- When a noncompliant AI system presenting a risk within the meaning of Article 79(1) has been made available on the market, immediately inform the provider or importer and the competent authorities, providing the details of the noncompliance and any corrective actions taken.
Cooperation with the authorities
Importers:
- Provide the authorities with all necessary information and documentation related to the AI system, including technical information, upon reasoned request.
- Cooperate in any action taken by the authorities in relation to high-risk AI systems.
Distributors:
- Provide the authorities with all necessary information and documentation regarding actions taken to demonstrate the conformity of an AI system with Chapter III, Section 2 of the AI Act, upon reasoned request.
- Cooperate with any action taken by the authorities in relation to high-risk AI systems made available on the market, in particular to reduce and mitigate the risks they pose.
Obligations of authorized representatives
The roles and obligations of authorized representatives are directly linked to providers. Providers that are established in third countries outside the EU are required to appoint authorized representatives in the EU prior to making their high-risk AI systems available on the EU market.
Authorized representatives can be any natural or legal person located or established in the EU who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on the provider's behalf the obligations and procedures established by the AI Act.
Mandate and documentation
Authorized representatives must hold a written mandate from the provider specifying their tasks and responsibilities. They must also provide a copy of the mandate to market surveillance authorities upon request.
Pursuant to this mandate, the authorized representative is required to carry out the tasks described below.
Due diligence
The authorized representative must verify that the EU declaration of conformity and the technical documentation referred to in Article 11 have been drawn up and that the provider has carried out an appropriate conformity assessment procedure. Further, when a provider has registered a high-risk AI system in the EU database referred to in Articles 49 and 71, the authorized representative must verify that the information referred to in Point 3 of Section A of Annex VIII is correct.
Compliance
Authorized representatives are obligated to terminate the mandate with a provider if they believe or have reason to believe the provider is acting contrary to its obligations pursuant to the AI Act. When applicable, the authorized representative must register the high-risk AI system in the EU database, per Articles 49 and 71, unless the provider has already done so.
Recordkeeping
The authorized representative must keep the provider's contact details, a copy of the EU declaration of conformity, the technical documentation and the certificate issued by the notified body, when applicable, for 10 years after the high-risk AI system is placed on the market or put into service.
Reporting
When the authorized representative decides to terminate the mandate with a provider, it must immediately inform the relevant market surveillance authority and the notified body, when applicable, including its reasons for doing so.
Cooperation with the competent authorities
The authorized representative must act as the point of contact for national authorities within the EU. It must provide the competent authorities with the documentation and information necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter III, Section 2 of the AI Act, including access to the logs, upon reasoned request. Also upon reasoned request, the authorized representative must cooperate with competent authorities on any action they take regarding the high-risk AI system, in particular to reduce and mitigate the risks it poses.
Responsibilities along the AI value chain
Article 25 of the AI Act addresses situations in which a deployer, importer, distributor or other third party becomes the provider of a high-risk AI system by taking specific actions.
Indeed, any distributor, importer, deployer or other third party shall be considered a provider of a high-risk AI system and shall be subject to the obligations of the provider under Article 16 if:
- They put their name or trademark on a high-risk AI system that has already been placed on the market or put into service, regardless of the contractual arrangements between the parties.
- They make a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system under Article 6.
- They modify the intended purpose of an AI system, including a general-purpose AI system, that was not classified as high-risk and has already been placed on the market or put into service, in such a way that it becomes a high-risk AI system.
Under the AI Act, if a deployer, importer or distributor takes any of the above actions, the provider that initially put the high-risk AI system on the market or into service is no longer considered a provider of that specific AI system. Therefore, depending on the situation, there is a change of designated role and a shift of responsibility from the initial provider to the new provider, which was previously a deployer, importer or distributor.
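To illustrate this role shift, the sketch below encodes the three triggers as a simple check. It is a schematic, assumption-laden reading of Article 25 with invented names, not a definitive interpretation of the provision.

```python
# Illustrative sketch of the triggers under which a deployer, importer,
# distributor or other third party is treated as the provider under Article 25.
# Names are invented; this is a schematic reading, not legal advice.
from dataclasses import dataclass


@dataclass
class Modification:
    rebranded_under_own_name: bool        # puts own name or trademark on the system
    substantial_modification: bool        # substantially modifies the system
    still_high_risk_after_change: bool    # system remains high-risk under Article 6
    purpose_change_makes_high_risk: bool  # repurposed so that it becomes high-risk


def becomes_provider(m: Modification) -> bool:
    """True when the actor takes on the provider obligations of Article 16."""
    return (
        m.rebranded_under_own_name
        or (m.substantial_modification and m.still_high_risk_after_change)
        or m.purpose_change_makes_high_risk
    )


print(becomes_provider(Modification(False, True, True, False)))    # True: substantial modification
print(becomes_provider(Modification(False, False, False, False)))  # False: role unchanged
```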
Nonetheless, the initial provider must closely cooperate with any new providers, make the necessary information available and provide the reasonably expected technical access and other assistance required to enable the new providers to fulfill their obligations, especially regarding compliance with the conformity assessment of high-risk AI systems.
However, in its contract with a deployer, importer or distributor, the initial provider can stipulate that its AI system is not to be changed into a high-risk AI system. In those cases, the initial provider has no obligation to hand over the documentation. Presumably, this only applies to AI systems that are not classified as high-risk when they are initially placed on the market or put into service.
For high-risk AI systems that are safety components of products, product manufacturers are considered providers and, therefore, must comply with the obligations imposed on providers under Article 16 when either the high-risk AI system is placed on the market together with the product under the product manufacturer's name or trademark, or the high-risk AI system is put into service under the product manufacturer's name or trademark after the product has been placed on the market.
Finally, any third party that supplies an AI system or tools, services, components or processes used in or integrated with a high-risk AI system must enter into a written agreement with the provider. The agreement should specify the necessary information, capabilities, technical access and other assistance, based on the generally acknowledged state of the art, to enable the provider of the high-risk AI system to fully comply with its obligations under the AI Act. This requirement does not apply to third parties that make tools, services, processes or components accessible to the public under a free and open-source license, with the exception of general-purpose AI models.
The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties similar to the standard contractual clauses between controllers and processors under the GDPR.
Top 10 operational impacts of the EU AI Act
The full series in PDF format can be accessed here.
- Part 1: Subject matter, definitions, key actors and scope
- Part 2: Understanding and assessing risk
- Part 3: Obligations on providers of high-risk AI systems
- Part 4: Obligations on nonproviders of high-risk AI systems
- Part 5: Obligations for general-purpose AI models
- Part 6: Governance: EU and national stakeholders
- Part 7: AI assurance across the risk categories
- Part 8: Post-market monitoring, information sharing and enforcement
- Part 9: Regulatory implementation and application alongside EU digital strategy
- Part 10: Leveraging GDPR compliance