Top 10 operational impacts of the EU AI Act – Subject matter, definitions, key actors and scope
This article is part of a series on the operational impacts of the EU AI Act. The full series can be accessed here.
Published: July 2024
The EU AI Act is the result of years of political, legal and technical debate and negotiation. In a field as complex and quickly evolving as AI, this has the potential to complicate operational compliance, particularly when the law inevitably introduces novel interpretational questions. Our understanding of the AI Act's provisions and requirements will be shaped and refined by a series of standards and regulatory guidance expected over the next 18 months. However, with a series of obligations likely to apply well before this period and the lead time required to implement AI governance measures, organizations should already be looking to understand and interpret key concepts.
Why should the AI Act matter to your organization?
The AI Act aims to ensure the development and deployment of AI is safe, trustworthy, transparent and respectful of fundamental rights, while accounting for progress and innovation in this epoch-defining space. It creates harmonized EU rules for placing AI systems on the market, putting them into service and governing their use. The act prohibits certain AI practices outright and places specific obligations on operators of different AI systems and general-purpose AI models.
Like the EU General Data Protection Regulation, the AI Act has a wide territorial reach, impacting operators within and outside the EU. It provides for significant sanctions, including high financial penalties and a strong regulatory enforcement framework. In the years ahead, substantial parts of the AI Act are expected to become the gold standard for global AI regulation, making an early understanding of its requirements critical for organizations everywhere.
Key concepts and definitions
The AI Act includes 68 definitions. While some important definitions are entirely new, other terms like placing on the market, making available on the market, putting into service, substantial modification, intended purpose, importer and distributor are helpfully based on existing EU law, particularly EU product safety regulation. As a starting point, some of the key concepts and definitions organizations will need to understand in detail are:
AI system
The AI Act does not define the term AI but rather defines an AI system as, "a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
This definition aligns with the one proposed by the Organisation for Economic Co-operation and Development and should be read cumulatively, considering each element. Under both definitions, an AI system must:
- Be machine-based.
- Be designed to operate with varying levels of autonomy.
- Have the ability to infer how to generate outputs from inputs received for explicit or implicit objectives and make decisions that can influence physical or virtual environments.
- Exhibit adaptiveness after deployment.
When interpreting this definition from a compliance perspective, it may be helpful to review the definitions of both AI and system independently, as a precursor to the elements above.
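Because the definition is cumulative, it can be approached as a checklist. The following minimal Python sketch, with invented names and deliberately simplified boolean questions, illustrates that style of assessment; it is a thinking aid, not legal advice or an official test.

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the cumulative elements of the
# "AI system" definition discussed above. Illustrative only.

@dataclass
class SystemProfile:
    machine_based: bool                   # runs as software on machines
    designed_for_autonomy: bool           # designed to operate with some level of autonomy
    infers_outputs_from_inputs: bool      # infers how to generate predictions, content,
                                          # recommendations or decisions from inputs
    outputs_influence_environments: bool  # outputs can influence physical or virtual environments
    adaptive_after_deployment: bool       # the act says "may exhibit adaptiveness"

def meets_ai_system_definition(p: SystemProfile) -> bool:
    # Cumulative reading, per the discussion above. Note the act phrases
    # adaptiveness as "may exhibit", so a looser reading of that element
    # is arguable.
    return all([
        p.machine_based,
        p.designed_for_autonomy,
        p.infers_outputs_from_inputs,
        p.outputs_influence_environments,
        p.adaptive_after_deployment,
    ])

# Example edge case: a fixed, rules-only automation that never infers
rpa_bot = SystemProfile(
    machine_based=True,
    designed_for_autonomy=True,
    infers_outputs_from_inputs=False,   # hand-coded rules, no inference
    outputs_influence_environments=True,
    adaptive_after_deployment=False,
)
assert not meets_ai_system_definition(rpa_bot)
```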
The AI Act's recitals on the notion of AI have evolved to prioritize inferences in particular, noting this typically includes machine learning and "logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved." Previous versions of the AI Act's text provide additional guidance on what these approaches may include.
Practically, commonly used and understood AI techniques, such as machine learning (including deep learning and reinforcement learning), computer vision, natural language processing and neural networks, are expected to fall within this definition.
There may be edge cases, such as some forms of automated, rules-based software like robotic process automation, that require specific assessment over time. Even so, the AI Act clearly intends to exclude traditional software systems that do not meet the cumulative criteria in the definition. The definition is also likely intended to be narrower in scope than the concept of automated decision-making under the GDPR, which focuses on decisions made by automated means without human involvement but does not, for example, account for elements such as inference.
It is also important to bear in mind that an AI system is not the same as an AI model, which is not specifically defined under the AI Act. Though AI models are indirectly governed, the law clarifies they are essential components of AI systems, not systems in and of themselves. As such, a model should be seen as a critical part of the technical infrastructure required for an AI system to function; it requires additional components, like a user interface, to generate usable outputs and collectively qualify as a regulated AI system.
General-purpose AI models
Though AI models are not defined, general-purpose AI models are. This term is used in the AI Act to refer to what may otherwise be understood as generative AI or foundation models. The approach taken to governing general-purpose AI has evolved over time, sometimes leading to heated debate on the impact of regulating one of the newest, most promising forms of AI on innovation in the EU. The final definition adopted considers the key functional characteristics of these models, primarily their generality and capability to perform a wide range of distinct tasks competently.
The AI Act defines a general-purpose AI model as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market."
Critiques of this definition include that the threshold indicating significant generality, currently set in the recitals at 1 billion parameters or more, may be too low and outdated considering the current state of the art. Another practical question is what threshold to set, in the absence of guidance, for a wide range of distinct tasks to be in scope. General-purpose AI models that have not yet been released, i.e., experimental or prototype models, are excluded from these obligations, which will apply only to models placed on the market. Lastly, the AI Act also includes provisions on general-purpose AI systems, which are AI systems based on general-purpose AI models.
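In the absence of formal guidance, organizations may want to triage models for closer analysis. The sketch below illustrates one hypothetical screen: the parameter figure tracks the recital-level indicator discussed above, while the task-count threshold is purely an assumption, since the act sets none.

```python
# Hypothetical first-pass screen reflecting the threshold debate above.
PARAMETER_INDICATOR = 1_000_000_000   # recital-level indicator of "significant generality"
ASSUMED_MIN_DISTINCT_TASKS = 5        # invented figure; no official threshold exists

def flag_for_gpai_analysis(parameter_count: int,
                           self_supervised_at_scale: bool,
                           distinct_tasks_performed: int,
                           placed_on_market: bool) -> bool:
    """Return True if a model warrants closer general-purpose AI analysis.

    A False result is not a conclusion: generality can arise below any
    fixed parameter count, which is precisely the critique noted above.
    """
    if not placed_on_market:
        return False  # pre-release research/prototype models are excluded
    return (parameter_count >= PARAMETER_INDICATOR
            and self_supervised_at_scale
            and distinct_tasks_performed >= ASSUMED_MIN_DISTINCT_TASKS)
```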
As discussed above, the AI Act primarily applies directly to AI systems and general-purpose AI models. Although it adopts a risk-based approach, some obligations, such as those relating to transparency, can apply across risk categories of AI systems, unless the system falls within one or more of the practices prohibited under the AI Act altogether. Most obligations, and corresponding liabilities, under the AI Act relate to the use of high-risk AI systems; these themes will be covered in detail later in this series.
Who does the AI Act apply to?
Borrowing from product safety law, the AI Act applies to operators across the AI value chain. These include:
Providers
Providers are the most heavily regulated operators under the AI Act. To qualify, providers must have developed an AI system or a general-purpose AI model, or had one developed on their behalf. They must also have "placed the AI system on the market" or "put the AI system into service." Providers established or located in a third country are also in scope if outputs produced by their AI systems are used in the EU. Lastly, the AI system or general-purpose AI model must be released under the provider's own name or trademark. Most obligations under the AI Act apply to providers of high-risk AI systems.
Deployers
This term refers to an individual or entity that uses an AI system under its authority, except during a personal, nonprofessional activity. Deployers may be established or located in the EU or, as with providers, in third countries, but they are in scope if outputs produced by their AI systems are used in the EU. In a business-to-consumer context, individual users of AI systems cannot be considered deployers under the AI Act. If deployers are acting on someone else's authority, as processors might under the GDPR for example, they would not qualify as deployers.
Importers
These are neither providers nor deployers, but they are located or established in the EU and place AI systems on the EU market that bear the name or trademark of individuals or entities based in third countries. Importers are the first to make these third-country AI systems available in the EU.
Distributors
Distributors are actors in the AI supply chain, other than providers and importers, that make AI systems available on the EU market as a follow-on action, after the AI system has been imported and placed on the market.
Product manufacturers
Product manufacturers place AI systems on the market or put them into service together with their own product.
Authorized representatives
Similar to representative requirements under the GDPR, authorized representatives are individuals or entities located in the EU, appointed to act on behalf of providers established outside the EU.
Beyond this list, the AI Act naturally also applies to individuals in the EU, framed as affected persons, from the perspective of having and exercising rights under the law. While a definition for affected persons did not make it to the final text of the AI Act, they should generally be understood as individuals, not only citizens, in the EU who might be subjected to or otherwise affected by AI systems.
When assessing the role your organization may play as an operator under the AI Act, it is important to be mindful that:
- An operator may be considered to hold multiple roles, such as provider and deployer, simultaneously. In these scenarios, they will need to fulfil the relevant obligations associated with those roles cumulatively, as sketched after this list.
- More than one entity may hold the same role simultaneously, e.g., two providers for one AI system.
- As with controller and processor designations under the GDPR, although roles may be assigned and ringfenced contractually, the true determinant of a party's role will be the role they perform in practice.
- An operator other than a provider may be deemed to be a provider in certain circumstances.
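These points can be pictured as a simple mapping from roles performed in practice to stacked obligations. The Python sketch below is illustrative only: the role names track the act's operator categories, but the obligation labels are simplified placeholders, not the act's text.

```python
from enum import Enum, auto

# Sketch of the points above: roles can combine, and obligations stack.

class Role(Enum):
    PROVIDER = auto()
    DEPLOYER = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()
    PRODUCT_MANUFACTURER = auto()
    AUTHORIZED_REPRESENTATIVE = auto()

# Simplified placeholder labels keyed by role
ILLUSTRATIVE_OBLIGATIONS = {
    Role.PROVIDER: {"conformity assessment", "technical documentation"},
    Role.DEPLOYER: {"use per instructions", "appropriate human oversight"},
    Role.IMPORTER: {"verify provider conformity before placing on market"},
    Role.DISTRIBUTOR: {"verify required markings and documentation"},
    Role.PRODUCT_MANUFACTURER: {"provider-style duties for own product"},
    Role.AUTHORIZED_REPRESENTATIVE: {"act on the provider's mandate in the EU"},
}

def applicable_obligations(roles_performed_in_practice: set[Role]) -> set[str]:
    # Roles follow conduct, not contracts, and apply cumulatively.
    duties: set[str] = set()
    for role in roles_performed_in_practice:
        duties |= ILLUSTRATIVE_OBLIGATIONS[role]
    return duties

# One entity acting as both provider and deployer of the same system
print(sorted(applicable_obligations({Role.PROVIDER, Role.DEPLOYER})))
```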
What is not in scope?
Traditional software that does not meet the cumulative criteria set out in the AI system definition will not be in the scope of the AI Act. Additionally, as illustrated in the sketch following this list, the AI Act:
- Recognizes the unique nature of free and open-source AI software, exempting it from provisions to encourage innovation and collaboration, subject to certain conditions and exclusions. However, this exemption does not apply to AI systems placed on the market or put into service as high-risk AI systems, prohibited AI systems listed under Article 5 or AI systems that fall within the scope of certain Article 50 transparency requirements.
- Sets out exclusions by sector, acknowledging certain areas require a different regulatory approach. Notably, AI systems used solely for scientific research and development are excluded from the scope of the AI Act, allowing the academic and scientific community to pursue advancements in AI more flexibly. Similarly, it does not govern AI systems developed or deployed for military, defense or national security purposes.
- Excludes deployers of AI systems who are natural persons using AI systems for purely personal, nonprofessional activities.
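Taken together, these exclusions suggest a rough first-pass screen like the hypothetical sketch below. It deliberately omits the conditional free and open-source exemption, which turns on risk classification and transparency triggers, and it assesses scope from the perspective of the actor being evaluated; actual scoping calls for legal analysis.

```python
# Deliberately simplified scope screen mirroring the exclusions above.
# Illustrative only; not a substitute for legal analysis.

def likely_outside_ai_act_scope(*,
                                meets_ai_system_definition: bool,
                                solely_scientific_research_and_development: bool,
                                military_defence_or_national_security: bool,
                                purely_personal_nonprofessional_use: bool) -> bool:
    if not meets_ai_system_definition:
        return True  # traditional software failing the cumulative definition
    return (solely_scientific_research_and_development
            or military_defence_or_national_security
            or purely_personal_nonprofessional_use)

# Example: a hobbyist deployer using an AI photo tool at home is excluded
# (the tool's provider, of course, remains separately in scope)
assert likely_outside_ai_act_scope(
    meets_ai_system_definition=True,
    solely_scientific_research_and_development=False,
    military_defence_or_national_security=False,
    purely_personal_nonprofessional_use=True,
)
```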
Conclusion
The implications of the AI Act are wide-ranging for both organizations and individuals in the EU and worldwide. As has been widely discussed, the new law takes a risk-based approach to regulating AI, discussed in detail in the next article in this series, and relies on a combination of product-safety regulation and fundamental rights, though several key concepts are new. Given the AI Act's extraterritorial scope and its many inevitable overlaps with the GDPR, product safety, consumer protection, fundamental rights and digital regulation, considered and comprehensive governance and compliance programs will be needed, particularly for organizations that, like AI, operate across borders.
Top 10 operational impacts of the EU AI Act
The full series in PDF format can be accessed here.
- Part 1: Subject matter, definitions, key actors and scope
- Part 2: Understanding and assessing risk
- Part 3: Obligations on providers of high-risk AI systems
- Part 4: Obligations on nonproviders of high-risk AI systems
- Part 5: Obligations for general-purpose AI models
- Part 6: Governance: EU and national stakeholders
- Part 7: AI assurance across the risk categories
- Part 8: Post-market monitoring, information sharing and enforcement
- Part 9: Regulatory implementation and application alongside EU digital strategy
- Part 10: Leveraging GDPR compliance