This article is part of a series on the operational impacts of the EU AI Act. The full series can be accessed here.


Published: October 2024



Chapter IX of the EU Artificial Intelligence Act contains the act's post-market monitoring, information-sharing and enforcement provisions. Readers familiar with the EU General Data Protection Regulation will recognize some similarities but, more importantly, significant differences, as the AI Act is primarily a product safety regulation, heavily inspired by the structure of product safety laws in the New Legislative Framework, such as the Medical Device Regulation.


Post-market monitoring obligations under the AI Act

As a regulation primarily focused on product safety, the AI Act includes both ex-ante and ex-post obligations. The rationale is to ensure AI systems remain compliant with the AI Act throughout their life cycles. This matters because many AI systems change after implementation, for example through continuous learning after deployment, which makes it difficult to comprehensively foresee at development time all the risks a system may present in practice.

Article 72 of the act requires providers of high-risk AI systems to collect and review experience gained from the use of their AI systems after they have been placed on the market or put into service. Such information may sometimes be provided by deployers but can also be collected through alternative sources, such as affected persons or competent authorities. The purpose of these post-market obligations is to ensure that, after the provider places a high-risk AI system on the market, it continuously remains compliant with the requirements that apply to high-risk AI systems under Section 2 of Chapter III. Providers can, and should, use the findings to improve their systems, as well as their design and development processes, and take corrective action when necessary.

Article 72 further provides that the post-market monitoring system should be based on a post-market monitoring plan, which forms part of the system's technical documentation and which providers must keep up to date throughout the system's life cycle. To assist providers with setting up such a plan, the AI Act requires the European Commission to adopt an implementing act with detailed provisions establishing a template for the post-market monitoring plan by 2 Feb. 2026.
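Because the Commission's template is still pending, there is no official format yet. Purely as an illustration of the kind of structured log a post-market monitoring plan might describe, the Python sketch below records feedback from the sources Article 72 mentions and flags entries that may warrant corrective action. All names and fields here are hypothetical, not prescribed by the act:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Source(Enum):
    # Article 72 feedback channels: deployers, affected persons, authorities.
    DEPLOYER = "deployer"
    AFFECTED_PERSON = "affected person"
    COMPETENT_AUTHORITY = "competent authority"

@dataclass
class MonitoringRecord:
    received: date
    source: Source
    description: str
    indicates_noncompliance: bool = False

@dataclass
class PostMarketLog:
    system_name: str
    records: list[MonitoringRecord] = field(default_factory=list)

    def log(self, record: MonitoringRecord) -> None:
        self.records.append(record)

    def needs_review(self) -> list[MonitoringRecord]:
        # Entries flagged as possible noncompliance should feed back into
        # the system's design and development process and may require
        # corrective action.
        return [r for r in self.records if r.indicates_noncompliance]

log = PostMarketLog("hypothetical-credit-scoring-v2")
log.log(MonitoringRecord(date(2025, 1, 20), Source.DEPLOYER,
                         "Unexpected score drift for one applicant group",
                         indicates_noncompliance=True))
print(len(log.needs_review()))  # 1
```

Whatever form such record-keeping takes, it would need to be aligned with the Commission's template once that is adopted.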


Information obligations for serious incidents

Article 73 obliges providers of high-risk AI systems to report serious incidents to the market surveillance authorities of the member states where the incident occurred. Procedures for such incident notification should be included in the provider's quality management system, per Article 17(1)(i). In principle, deployers of high-risk AI systems should immediately inform the system's provider of any serious incident they identify. If they cannot reach the provider, they should instead follow the procedure in Article 73 and notify the market surveillance authority themselves, per Article 26(5).

It is important to note that infringements of fundamental rights qualify as serious incidents, which gives operators of high-risk AI systems reason to pay close attention. The fundamental rights meant here are primarily those included in the Charter of Fundamental Rights of the European Union, a long list ranging from traditional civil rights, such as the right to life, to rights that are often overlooked, such as the right to consumer protection. On the other hand, a serious incident occurs only when an infringement actually takes place, not when it is a mere possibility.

Unlike the GDPR, the AI Act does not provide for a one-stop-shop system. As such, providers do not have the option of reporting solely to a lead supervisory authority when the AI system is deployed in multiple member states. If a serious incident affects multiple member states, providers must notify the market surveillance authority in each member state where the AI system is available.

In principle, providers must report serious incidents immediately, but no later than 15 days after establishing a causal link between the incident and the AI system, or a reasonable likelihood of such a link. As under the GDPR, an interim notification may be sent if a complete report is unavailable at the time of initial reporting. The act is unclear about whether providers can submit multiple interim reports before the final report, and it does not specify a deadline for filing the final report. Per Article 73(7), the Commission must publish guidance on reporting serious incidents no later than 12 months after the act's entry into force, i.e., by 2 Aug. 2025, which could clarify these issues.
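Because the 15-day period runs from the day the provider establishes the (likely) causal link, the outer reporting limit reduces to simple date arithmetic. The sketch below is a minimal illustration, not legal advice; exact day-counting conventions should be verified against the EU rules on time limits:

```python
from datetime import date, timedelta

# Article 73: report immediately, and in any event no later than 15 days
# after establishing a (likely) causal link between the serious incident
# and the AI system.
REPORTING_WINDOW = timedelta(days=15)

def latest_notification_date(link_established: date) -> date:
    """Outer bound for notifying the market surveillance authority."""
    return link_established + REPORTING_WINDOW

print(latest_notification_date(date(2025, 3, 1)))  # 2025-03-16
```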

After filing the report, providers must perform the necessary investigations of the serious incident and the AI system involved, including a risk assessment and any necessary corrective actions. Providers may not alter the AI system in a way that could affect any subsequent evaluation of the causes of the incident before first informing the competent authorities of such action.

After receiving a notification, the market surveillance authority must inform the relevant national public authorities or bodies and take appropriate measures within seven days. When there are no other effective means to eliminate the serious risk, the authority can withdraw or recall the AI system or prohibit it from being made available on the market. The competent authorities will also notify the European Commission. If the serious incident involves an infringement of fundamental rights, the market surveillance authority or authorities must also inform the national fundamental rights authority or authorities.

Exemptions from notification apply when the AI system is subject to the Medical Device Regulation or the In Vitro Diagnostic Medical Device Regulation. In such cases, notification under the AI Act is required only when the incident concerns an infringement of fundamental rights, e.g., the right to be free from discrimination, as such incidents are not covered by those regulations.


Enforcement: A fragmented surveillance landscape

The AI Act's enforcement will differ from the GDPR's. While data protection authorities will likely play an important supervisory role, enforcement of the AI Act will involve a number of other authorities at both the national and EU levels. Notably, there is no one-stop-shop system, which means organizations do not have the option to appoint a single lead supervisory authority for their businesses. This does not come entirely as a surprise, as the one-stop-shop system is not typically applied in product safety regulation. It does mean, however, that organizations may face a complex supervisory landscape.

Section 3 of Chapter IX lays down the rules for enforcement, defining the competence of the different national authorities. As a baseline, Regulation (EU) 2019/1020, the Market Surveillance Regulation, is declared applicable to AI systems covered by the AI Act, which means all of its provisions apply mutatis mutandis to the market surveillance of AI systems. Member states can designate more than one market surveillance authority for the AI Act, provided their respective duties are clearly defined and appropriate communication and coordination mechanisms are in place. It appears most member states will use the option to appoint multiple market surveillance authorities and, additionally, multiple sector-specific supervisory authorities.

Market surveillance authorities will be responsible for the supervision and enforcement of the AI Act. Among other things, they are tasked with overseeing the testing of AI systems in real-world conditions in accordance with the AI Act. They also conduct evaluations of AI systems that potentially pose risks to people's health, safety or fundamental rights, and they handle serious incident notifications. They have all the powers laid out in Article 14 of the Market Surveillance Regulation, including the powers to carry out unannounced on-site inspections, acquire product samples, reverse-engineer them, identify noncompliance, obtain evidence, recall AI systems and impose penalties. Additionally, under Articles 74(12) and 74(13), providers must grant them full access to the documentation as well as the training, validation and testing datasets used to develop high-risk AI systems and, under certain conditions, to the source code of the high-risk AI system.

For the most part, member states are free to appoint the market surveillance authorities of their choice. However, Article 74 does designate specific authorities for certain areas of surveillance. The market surveillance authorities already designated for products regulated by the directives and regulations in Section A of Annex I, also referred to as the New Legislative Framework, will generally also be competent under the AI Act. Annex I(A) covers a wide array of products, including machinery, toys, radio equipment, in vitro diagnostic medical devices, two- or three-wheel vehicles and quadricycles, and motor vehicles.

Operators of high-risk AI systems that fall within this scope will generally not have to deal with additional market surveillance authorities. The existing procedures under these regulations, e.g., for dealing with risks and formal noncompliance, will often apply instead of those of the AI Act. For AI systems placed on the market, put into service or used by financial institutions, the authorities responsible for supervising those institutions under applicable financial services law will generally act as market surveillance authorities, insofar as a direct connection exists with regulated financial services. Member states can appoint a different market surveillance authority where appropriate, and only insofar as coordination is ensured.

Additionally, Article 74(8) appoints the national authorities designated under either the GDPR or the Law Enforcement Directive, usually the national DPA in both cases, as the competent market surveillance authorities for high-risk AI systems in the following areas:

  • Law enforcement under Annex III, point 6.
  • Migration, asylum and border control management per Annex III, point 7.
  • Administration of justice and democratic processes under Annex III, point 8.
  • Biometrics under Annex III, point 1, but only insofar as they are also used in any of the above areas.

Member states are not allowed to designate any authority other than those appointed based on the GDPR or Law Enforcement Directive.

Beyond the foregoing, other national authorities can become competent because they already hold a mandate at a certain level or for specific topics, such as national cybersecurity supervisors, or because they are allotted competence under Article 77 as authorities protecting fundamental rights. Overall, the AI Act will often present a fragmented supervisory landscape, which will require coordination between different authorities to function properly.

At the EU level, however, the AI Act takes a centralized approach. It allocates supervisory powers to two entities that will function as one-stop shops: the European Data Protection Supervisor and the EU AI Office, which was established as part of the European Commission in May 2024. Pursuant to Article 74(9), the EDPS will be the market surveillance authority for EU institutions, bodies, offices and agencies, without room for derogation. Article 88 designates the AI Office as the competent authority for monitoring and supervising general-purpose AI models. Insofar as AI systems are based on general-purpose AI models and the same provider develops both the model and the system, the AI Office will also have the power to monitor and supervise the system's compliance. The AI Office can monitor compliance, for instance by requesting documentation or conducting evaluations, but can also act on complaints by downstream providers or on alerts of systemic risks by the scientific panel of independent experts. Aside from its surveillance duties, the AI Office is set to encourage and facilitate codes of practice that contribute to the proper application of the AI Act to general-purpose AI models, coordinate joint investigations between market surveillance authorities from different member states, and work closely with the European AI Board to support national competent authorities in establishing and developing regulatory sandboxes.


Few remedies for affected persons

Unlike the GDPR, the AI Act does not offer affected persons many remedies to invoke when providers or deployers have breached an obligation to their detriment. Essentially, the act offers affected persons only two rights: the right to lodge a complaint with the relevant market surveillance authority and the right to an explanation of individual decision-making based mainly on the output of a high-risk AI system.

The right to lodge a complaint can be found in Article 85 of the act and can be exercised by both natural and legal persons. Anyone with grounds to consider that the act has been infringed can submit a reasoned complaint to the relevant market surveillance authority, which will use such complaints for the purpose of conducting market surveillance activities. Authorities can handle complaints according to their own established procedures. A similar right is provided to downstream providers of general-purpose AI models, i.e., parties that use such models to build AI systems, per Article 89 of the act. Downstream providers can submit duly reasoned complaints to the AI Office when they believe a general-purpose AI provider has infringed the AI Act.

The right to an explanation of individual decision-making can be found in Article 86. While reminiscent of the GDPR's right not to be subject to automated decision-making, the right established in the AI Act does not generally prohibit decisions based on AI system outputs; instead, it gives affected individuals a right to an individualized explanation. This right can be invoked regardless of whether the decision qualifies as automated decision-making within the meaning of GDPR Article 22.

Any person subject to a decision a deployer makes on the basis of output from a high-risk AI system listed in Annex III, where that decision produces legal effects or similarly significantly affects them in a way they consider to have an adverse impact on their health, safety or fundamental rights, has the right to obtain from the deployer a clear and meaningful explanation of the AI system's role in the decision-making procedure and of the main elements of the decision. The explanation should give affected persons a basis on which to exercise their rights. Although the provision does not explicitly state whether it applies solely to natural persons or also to legal persons, the right likely extends to both. The right to an explanation does not apply, likely for security reasons, if the AI system in question is intended to be used as a safety component in the management and operation of critical infrastructure.

The act does not offer a right to a specific remedy, such as the right to compensation in Article 82 of the GDPR. However, individuals can rely on other rules, such as the GDPR and liability laws, to address harm caused by AI systems. The EU is currently working on an AI Liability Directive to establish more effective means for individuals to seek compensation for damage caused by AI products. If adopted, this directive would make it easier for affected persons to recover damages they suffer due to the deployment of an AI system.


Fines

The rules surrounding penalties are relatively similar to those of the GDPR. Member states must lay down their own rules on enforcement measures, which must be effective, proportionate and dissuasive, and should take into account the Commission's guidelines once adopted.

Under Article 99, the maximum fine amounts to 35 million euros or 7% of worldwide annual turnover, whichever is higher. This maximum applies only to breaches of Article 5, i.e., placing prohibited AI systems on the market or putting them into service. For most other breaches of the AI Act, the maximum fine amounts to 15 million euros or 3% of worldwide annual turnover. An additional category of fines, with a maximum of 7.5 million euros or 1% of worldwide annual turnover, is introduced for supplying incorrect, incomplete or misleading information to notified bodies or national competent authorities. This is important for providers and deployers to consider when reporting a serious incident to the market surveillance authority or when asked to share information during an investigation. Article 99 further lists a set of circumstances, similar to those set out in the GDPR, that authorities must consider when deciding whether to impose an administrative fine and when setting its amount.
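Each fine tier reduces to the same arithmetic: the cap is the higher of a fixed amount and a percentage of worldwide annual turnover. The minimal sketch below illustrates the comparison for a hypothetical provider; the turnover figure is invented:

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Article 99 cap: the higher of a fixed amount and a percentage
    of total worldwide annual turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Fine tiers under Article 99 (fixed cap in euros, turnover percentage).
TIERS = {
    "prohibited practices (Article 5)": (35_000_000, 0.07),
    "most other breaches": (15_000_000, 0.03),
    "incorrect, incomplete or misleading information": (7_500_000, 0.01),
}

turnover = 2_000_000_000  # hypothetical provider with 2 billion euros turnover
for breach, (fixed_cap, pct) in TIERS.items():
    print(f"{breach}: up to {fine_cap(turnover, fixed_cap, pct):,.0f} euros")
```

At this hypothetical turnover, the percentage prong exceeds the fixed amount in every tier; for smaller companies, the fixed amounts will typically dominate.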

The AI Act also contains specific provisions on fines for providers of general-purpose AI models. The Commission can fine these providers up to 15 million euros or 3% of their worldwide annual turnover, whichever is higher, if it finds the provider intentionally or negligently:

  • Infringed upon provisions of the AI Act.
  • Failed to comply with a request for a document or for information, or supplied incorrect, incomplete or misleading information.
  • Failed to comply with a measure requested under Article 93.
  • Failed to provide the Commission access to the general-purpose AI model, either with or without systemic risk, to conduct an evaluation pursuant to Article 92.

Like the fines applicable to high-risk AI systems, these fines must be effective, proportionate and dissuasive. A general-purpose AI provider that decides to challenge a fine will need to turn to the Court of Justice of the European Union, which has unlimited jurisdiction to review the Commission's fining decisions and may cancel, reduce or increase the fine imposed.


Additional resources


Top 10 operational impacts of the EU AI Act

Coming Soon

  • Part 9: Regulatory implementation and application alongside EU digital strategy
  • Part 10: Leveraging GDPR compliance

