Top 10 operational impacts of the EU AI Act – Governance: EU and national stakeholders
This article is part of a series on the operational impacts of the EU AI Act. The full series can be accessed here.
Published: September 2024
Download this series: The Top 10 operational impacts of the EU AI Act is available in PDF format.
The EU AI Act sets up an intricate governance structure with various stakeholders at both the EU and national levels to ensure its effective and coherent implementation and enforcement. Chapter VII of the AI Act provides an overview of this structure, but certain details concerning specific roles, tasks and interactions can be found elsewhere in the act. Nor does Chapter VII mention all the actors involved in the act's implementation and enforcement.
This article examines each body, outlining its composition and main competences to help organizations better understand the AI Act's governance structure. While listing every task of each stakeholder is beyond the scope of this article, the annex below maps each stakeholder's responsibilities to the relevant provisions of the AI Act's text.
Who is responsible for the AI Act's governance at the EU level?
Besides initiating the AI Act, the European Commission is also an important facilitator of its implementation and enforcement. Along with other bodies, many of which are newly established at the EU level, it aims to ensure consistent application of the AI Act across the EU.
AI Office
The AI Act itself provides little information on the composition of the AI Office. Its current setup, effective as of 16 June 2024, only became clear months after the publication of the European Commission decision that officially established it in January 2024. The AI Office was not built completely from scratch. The Commission renamed and reorganized an existing unit, Directorate A for Artificial Intelligence and Digital Industry within the Directorate-General for Communications Networks, Content and Technology, into five topic-specific units and two sections with an advisory function. The AI Office is led by Lucilla Sioli, the former director of Directorate A.
In its initial stages, the AI Act envisioned the European Commission and the AI Office as having two distinct roles. The notion of the AI Office has evolved since then. As previously stated, it is currently set up as part of the Commission's administrative structure and is therefore a component of the Commission. According to the final AI Act text, the AI Office is "the Commission's function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024."
However, the AI Act identifies and refers to both the Commission and the AI Office throughout its text. The resulting confusion is a consequence of expeditious negotiations and the pressure to publish the AI Act before the summer recess, which left little time to clean up its final wording. To avoid further confusion, this article refers to both the Commission's and the AI Office's competences under the AI Act as those of the AI Office.
The AI Office plays an important role in realizing the AI Act's goals and is therefore assigned a multitude of responsibilities for facilitating its implementation, including:
- Issuing standardization requests to European standardization organizations that must translate the AI Act's rules and obligations into specific technical requirements.
- Adopting secondary legislation, such as delegated and implementing acts, to clarify the AI Act's rules and obligations and to keep the act relevant. These acts will cover topics including criteria and use cases for high-risk AI and common specifications for areas without suitable harmonized standards.
- Issuing guidelines on practical implementation of the AI Act, including on the application of requirements and obligations for high-risk AI systems.
- Setting up and maintaining various databases, including on general-purpose AI models with systemic risks, high-risk AI systems listed in Annex III, information on notified bodies and AI regulatory sandboxes.
- Ensuring effective support mechanisms for national competent authorities, for instance by facilitating the creation and operation of AI regulatory sandboxes and coordinating joint investigations of national market surveillance authorities.
- Supporting relevant sectoral bodies with implementing rules on prohibited AI practices and high-risk AI systems.
- Facilitating the drawing up of codes of conduct and codes of practice at the EU level and monitoring the implementation and evaluation of the latter.
- Facilitating compliance with the AI Act, particularly of small- and medium-sized enterprises, including by providing standardized templates upon the AI Board's request and raising awareness about the AI Act's obligations.
- Ensuring that the AI Act and other EU digital legislation under the Commission's supervisory and enforcement powers, such as the Digital Markets Act and the Digital Services Act, are applied to AI systems in a coordinated manner.
- Assisting other bodies at the EU level with organizational matters. The AI Office acts as the secretariat for the AI Board and provides administrative support for the Advisory Forum and the Scientific Panel of Independent Experts.
The AI Office is also tasked with the supervision, monitoring and enforcement of rules concerning general-purpose AI models and is supported in these tasks by the Scientific Panel. Specifically, the AI Office is tasked with:
- Developing resources for evaluating general-purpose AI capabilities and monitoring the emergence of unforeseen general-purpose AI risks.
- Conducting investigations and requesting information from the operators of general-purpose AI models.
- Adopting mitigation measures, corrective measures and sanctions in case of infringements.
- Acting as a market surveillance authority for AI systems based on general-purpose AI models when the model and system are provided by the same provider.
The AI Office periodically reviews certain aspects of the AI Act and will evaluate the act as a whole five years after it enters into force and every four years thereafter. It also evaluates various decisions adopted at the national level, including:
- Measures adopted by national market surveillance authorities against operators of AI systems. When there is a dispute between member states concerning their suitability, it has the decisive authority to determine whether these measures must be followed in other member states or whether they are inadequate and must be withdrawn.
- Instances in which market surveillance authorities authorize the deployment of high-risk AI systems without prior conformity assessments.
- The competence of notified bodies. It may investigate their competence when in doubt and even adopt corrective measures.
The Commission decision establishing the AI Office states it will work in close cooperation with various stakeholders at the sectoral, national and EU levels when carrying out its tasks. However, the relationship between those stakeholders, the AI Office and other EU bodies is not clear cut. For instance, in a situation concerning financial services and AI, questions arise about how the competences of the AI Office and the European Central Bank will intersect.
It should be noted the competences of the AI Office are not restricted to the AI Act. The AI Office has a central role in the EU concerning the development, launch and use of trustworthy AI. It is also tasked with promoting the EU approach to trustworthy AI on the international stage.
AI Board
The AI Board was established to ensure consistent and effective application of the AI Act across the EU. It provides a platform for dialogue and coordination between national competent authorities for sharing expertise and best practices, identifying common issues and ways to collectively address them, and working to harmonize administrative practices, for instance concerning derogation from conformity assessment procedures and the functioning of AI regulatory sandboxes.
The AI Board advises the AI Office and member states on the AI Act's implementation. It issues recommendations and opinions on various matters, including on:
- Qualified alerts regarding general-purpose AI models.
- The development and application of codes of conduct and codes of practice.
- The use of harmonized standards.
- The need to revise certain sections of the AI Act.
- AI trends and international matters on AI.
The AI Board is composed of one representative per EU member state, each serving a three-year mandate, with the European Data Protection Supervisor participating as an observer and the AI Office participating without voting rights. Depending on the meeting's agenda, an invitation may be extended to other national and EU bodies. The AI Board has two standing subgroups, though additional standing or temporary subgroups may be established if needed.
AI Advisory Forum
Upon request, the AI Advisory Forum provides the AI Board and the AI Office with technical expertise, recommendations, opinions and other written contributions on matters including harmonized standards and common specifications. It may also set up standing or temporary subgroups to analyze specific AI Act-related issues. Anyone interested in the AI Advisory Forum's yearly activities will be able to consult its publicly accessible annual reports.
The AI Advisory Forum is composed of members appointed by the AI Office with AI expertise, representing a balanced selection of stakeholders from industry, including startups and SMEs, civil society and academia. The EU Fundamental Rights Agency, the EU Agency for Cybersecurity and the European standardization organizations are its permanent members. This balanced representation ensures both commercial and noncommercial interests are considered when contributions from the AI Advisory Forum are requested. Some suggest the European AI Alliance, a European Commission initiative with the goal of creating an open policy dialogue on AI, will take up the role of the AI Advisory Forum, but that is not yet confirmed.
Scientific Panel of Independent Experts
The main role of the Scientific Panel of Independent Experts is to support the AI Office in monitoring general-purpose AI models. Its tasks include:
- Alerting the AI Office of general-purpose AI models with systemic risks at the EU level.
- Contributing to the development of resources for evaluating general-purpose AI capabilities and other tools and templates.
- Advising the AI Office on the classification of general-purpose AI models.
- Supporting market surveillance authorities and their cross-border activities.
- Providing EU member states with access to its pool of experts, possibly for a fee.
The panel consists of experts selected by the AI Office who are knowledgeable in a range of topics in the field of AI. They must be able to demonstrate such scientific or technical expertise. Additionally, they must be independent from any provider of AI systems or general-purpose AI models, perform their tasks fully independently and objectively, and respect confidentiality requirements. The composition of the panel must be balanced geographically and gender-wise to ensure a fair EU-wide representation.
EDPS
The EDPS' principal role is to ensure the EU bodies' compliance with European data protection rules. The AI Act assigns it additional competences by designating it as a market surveillance authority for the EU bodies concerning their implementation of the AI Act. In this role, it may establish an AI regulatory sandbox to provide the EU bodies with a safe testing environment. In case of their noncompliance, the EDPS may impose administrative fines.
EU AI testing support structures
The EU AI testing support structures are bodies, either at national or EU level, that are designated by the AI Office to support market surveillance actions on AI in the EU. They increase the capacity of national market surveillance authorities by testing products upon their or the AI Office's request and by developing new techniques and methods of analysis. They must also provide independent technical or scientific advice when requested by the AI Office, market surveillance authorities or the AI Board.
European standardization organizations
European standardization organizations, such as the European Telecommunications Standards Institute, the European Committee for Standardization and the European Committee for Electrotechnical Standardization, play an important role in supporting the implementation of EU legislation and policies, as they develop standards that facilitate compliance with their rules and obligations.
When it comes to the AI Act, the latter two bodies have established the Joint Technical Committee 21 on AI. The committee is divided into topic-specific working groups of experts, which, upon receiving a standardization request from the AI Office, work on developing harmonized standards that translate the rules and obligations of the AI Act into concrete technical requirements. Once such a standard is developed and granted harmonized status, organizations may voluntarily adopt it to demonstrate compliance with a specific requirement of the AI Act.
Who is responsible for the AI Act's governance at the national level?
Member states are responsible for implementing and enforcing the AI Act on a national level. They are supported in this role by national competent authorities and other bodies established at a national level.
Member states
Member states must designate national competent authorities within 12 months of the AI Act entering into force and ensure they have sufficient resources, sufficient competences and a proper infrastructure.
Member states are also responsible for establishing rules on the AI Act's enforcement measures, such as penalties and administrative fines but also warnings and other nonmonetary measures. The rules must be in accordance with the requirements set out in the AI Act itself, as well as the AI Office's guidelines on this matter.
Member states also have the power, within certain limits, to put laws in place authorizing the full or partial use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement. They may also introduce more restrictive laws on the use of real-time remote and post-remote biometric identification systems.
National competent authorities
Member states must designate at least one market surveillance authority and one notifying authority, as well as choose one market surveillance authority as a single point of contact for matters concerning the AI Act. Depending on their specific needs, member states may designate more than one of each type of authority.
Several approaches are already emerging. Denmark appointed its Agency for Digital Government, Italy's national data protection authority expressed its interest in taking up the role, and Spain chose to create a new authority, the Agencia Española de Supervisión de la Inteligencia Artificial, from scratch. Regardless of the approach, significant capacity building will be needed to equip authorities for their new responsibilities, whether in staffing, budget or expertise. For instance, DPAs will need to acquire competencies outside their typical lines of work, such as product-safety supervision and enforcement.
While national competent authorities oversee the implementation of the AI Act, they also facilitate it. They must establish new, or participate in existing, AI regulatory sandboxes, individually or jointly with other member states' competent authorities, supervise their use, and report on it to the AI Office and the AI Board. SMEs must be given priority access to such AI regulatory sandboxes. In their facilitatory role, national competent authorities must also provide guidance, especially to SMEs, on the AI Act's implementation and assist with drawing up codes of conduct. They must carry out their tasks independently and impartially.
Market surveillance authorities
Market surveillance authorities monitor and investigate AI systems' compliance with the AI Act, including evaluating AI systems classified as nonhigh risk. They can request any information that may be relevant to their investigations from providers and deployers. They can carry out such investigations and other activities jointly with other member-state market surveillance authorities, particularly for high-risk AI systems that present serious risks in cross-border cases, or in cooperation with the AI Office in certain cases concerning general-purpose AI or high-risk AI. They must also cooperate and coordinate their activities with sectoral market surveillance authorities when relevant. However, the relationship between different authorities is somewhat unclear, which may create issues where competences overlap.
In the event of noncompliance, market surveillance authorities adopt measures, including corrective action and restricting or prohibiting AI systems from the EU market. In the latter case, the authority informs the AI Office and other member-state authorities of such noncompliance and the measures taken. The same goes for when noncompliance is not restricted to the national territory of the market surveillance authority concerned. If the AI Office or other national authorities object to such measures, the lead market surveillance authority must consult the AI Office and the operators concerned. If the AI Office then deems the measures appropriate, they must be adopted by other member-state authorities and if not, they must be withdrawn.
If a high-risk AI system is found to be compliant but nonetheless presents a risk to the health or safety of people, to fundamental rights or to other aspects of public interest protection, the market surveillance authority must require the relevant operator to eliminate that risk through appropriate measures. The market surveillance authority must inform the AI Office and other member states of the high-risk AI system in question, the risk it presents and the measures taken, and enter into consultation with the AI Office and the member states and operators concerned. The AI Office may then request the adoption of different measures as necessary.
Market surveillance authorities must not only track compliance but also handle complaints they receive from companies and individuals. The procedures for doing so are left to authorities themselves. Additionally, they must collect serious incident reports from high-risk AI system providers and, in certain cases, notify such incidents to authorities protecting fundamental rights.
Apart from overseeing AI systems' compliance with the AI Act, market surveillance authorities may authorize the deployment of high-risk AI systems without prior conformity assessments for exceptional reasons, such as public security, and for a limited period while the required conformity assessment procedures are completed. In such cases, they must inform the AI Office and other member states and, if objections are raised, enter into consultations with the AI Office. The AI Office may request the market surveillance authority withdraw the authorization if it is deemed unjustified.
Additionally, market surveillance authorities supervise the testing of AI systems in real-world conditions, handle applications for testing high-risk AI systems in real-world conditions outside AI regulatory sandboxes and monitor the testing when needed.
Finally, market surveillance authorities are required to share any relevant findings from their activities with the AI Office and other relevant stakeholders, such as competition authorities, and report to the AI Office on the use of real-time biometric identification systems.
With such pivotal responsibilities under the AI Act, it is fair to say market surveillance authorities are the central point of interest for AI system operators in the EU.
Notifying authorities
Notifying authorities assess, designate, notify and monitor conformity assessment bodies. They develop procedures for such activities collectively with other member-state notifying authorities and must generally coordinate their activities and cooperate, including by exchanging best practices. They must also ensure bodies notified by them participate in the sectoral group of notified bodies to enhance coordination and cooperation.
In case of doubt, notifying authorities investigate notified bodies' competences and take necessary measures, including suspending or withdrawing notifications. They must communicate all notifications and any changes to the AI Office and other member states.
There should be no conflict of interest between notifying authorities and conformity assessment bodies. Notifying authorities must respect confidentiality obligations and perform their duties objectively and impartially, for instance by having different people carry out assessing and notifying activities.
Notified bodies
Notified bodies are conformity assessment bodies accredited to perform conformity assessment activities, such as testing, inspecting and certifying high-risk AI systems. They also determine the procedures for carrying out such activities. They must cooperate and coordinate with other notified bodies in the form of a sectoral group of notified bodies.
Notified bodies may perform conformity assessment procedures fully or partially through subcontractors or subsidiaries that comply with the same requirements applicable to them. In such cases, notified bodies must make the information public and inform notifying authorities.
To be accredited as a notified body, an organization must fulfill certain requirements. For instance, it must:
- Be established in an EU member state. In certain cases, third-country establishments may be authorized to perform notified bodies' activities.
- Be independent from providers of AI systems under conformity assessments and their competitors.
- Not be directly involved in designing, developing, marketing or using high-risk AI systems, or represent parties that are.
- Ensure expertise, impartiality, objectivity, confidentiality and independence of its activities, safeguarded by documented procedures.
- Provide all relevant documentation confirming its activities and competences to the notifying authority of the country of its establishment.
- Be informed about current relevant standards, for instance through direct or representative participation in European standardization organizations.
National authorities protecting fundamental rights
As the protection of fundamental rights is crucial under the AI Act, national public authorities protecting them also play a role in the act's enforcement. When such an authority suspects the use of a high-risk AI system identified in Annex III may breach EU fundamental rights obligations, it can request and access any documentation created or maintained under the AI Act to determine the existence of such a breach, while ensuring confidentiality obligations are respected. If a request is made, the authority protecting fundamental rights must inform the relevant market surveillance authority. It may request the market surveillance authority to perform technical testing of the AI system in question if the documentation obtained is not sufficient to identify a breach.
Member states must publish and maintain a public list of national public authorities that protect fundamental rights.
DPAs
While EU member states are free to designate DPAs as their national competent authorities responsible for implementing and enforcing the AI Act, they are already assigned the task of a market surveillance authority concerning certain high-risk AI systems, including those listed in points 6, 7 and 8 of Annex III. In addition, DPAs are involved in the operation and supervision of AI regulatory sandboxes when they are used by AI systems that process personal data.
They must also gather information on the use of real-time and post-remote biometric identification systems and report annually on the use of the former to the AI Office.
Law enforcement or civil protection authorities
Law enforcement or civil protection authorities are given the power to use real-time remote biometric identification systems in publicly accessible spaces in specific and limited situations only when permitted by member-state law. In addition, certain requirements must be fulfilled:
- A fundamental rights impact assessment must be completed before such use.
- The use must be preauthorized by a judicial or independent administrative authority, unless it concerns a situation of urgency.
- Real-time remote biometric identification systems must be registered in the EU database.
- Each use of such a system must be notified to the relevant market surveillance authorities and DPAs.
The AI Office reviews such authorizations and may deem them unjustified. In such cases, their use must be stopped and resulting outputs must be discarded immediately.
Furthermore, the AI Act allows law enforcement or civil protection authorities to deploy specific high-risk AI systems without preauthorization by a market surveillance authority. However, this is only allowed for exceptional reasons, including threats to public security or the safety of individuals. Even if such conditions are met, the authority in question must request the authorization without undue delay and, if it is rejected, immediately stop the use of the system and discard resulting outputs.
Judicial authorities or independent administrative bodies
Judicial authorities or independent administrative bodies can authorize the deployment of real-time remote biometric identification systems in publicly accessible spaces for law enforcement in specific and limited situations only when permitted by law in the member state concerned.
This section outlines the different EU and national stakeholders, and the articles and recitals of the EU AI Act in which their competences and compositions are referenced.
- AI Advisory Forum
- AI Board
- DPAs
- EDPS
- EU AI testing support structures
- European Commission through the AI Office
- European standardization organizations
- Judicial authorities or independent administrative bodies
- Law enforcement or civil protection authorities
- Market surveillance authorities
- Member states
- National authorities protecting fundamental rights
- National competent authorities
- Notified bodies
- Notifying authorities
- Scientific Panel of Independent Experts
Top 10 operational impacts of the EU AI Act
The full series in PDF format can be accessed here.
- Part 1: Subject matter, definitions, key actors and scope
- Part 2: Understanding and assessing risk
- Part 3: Obligations on providers of high-risk AI systems
- Part 4: Obligations on nonproviders of high-risk AI systems
- Part 5: Obligations for general-purpose AI models
- Part 6: Governance: EU and national stakeholders
- Part 7: AI assurance across the risk categories
- Part 8: Post-market monitoring, information sharing and enforcement
- Part 9: Regulatory implementation and application alongside EU digital strategy
- Part 10: Leveraging GDPR compliance