Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Despite industry pressure and its own earlier signals suggesting a possible delay, the European Commission confirmed the EU Artificial Intelligence Act will take effect for general-purpose AI models beginning 2 Aug.

A lack of published guidelines defining general-purpose AI models led to speculation that AI Act enforcement would be delayed. Fulfilling the AI Act's mandate for codes of practice under Article 56, the Commission published the General-Purpose AI Code of Practice on 10 July to clarify key concepts and enable the enforcement of obligations on general-purpose AI model providers by the originally established deadline.

What are general-purpose AI models?

To understand general-purpose AI models, it is important to recognize the main differences between "AI systems" and "general-purpose AI models" under the AI Act.

AI systems are applications — often referred to as agents — that may, but don't have to, build on a general-purpose AI model. General-purpose AI models serve as the infrastructure layer that requires additional components, such as a user interface, to function as AI systems. AI models are typically integrated into and become part of AI systems.

Article 3(63) defines general-purpose AI models as being trained on large amounts of data, "capable of competently performing a wide range of distinct tasks" and able to be "integrated into a variety of downstream systems or applications."

The General-Purpose AI Code of Practice sets a clear numeric threshold — 10 to the power of 23 floating-point operations (FLOP) of training compute — that makes it easy for providers to determine whether they operate a general-purpose AI model. The value is based on the cumulative computational resources used to train the model and is roughly the amount needed to train a model with one billion parameters on a large amount of data.
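As a rough illustration of that order of magnitude, the sketch below estimates training compute using the widely cited rule of thumb of about six FLOP per parameter per training token. Both the approximation and the token count are illustrative assumptions, not figures from the code itself.

```python
# Back-of-the-envelope estimate of training compute for a
# one-billion-parameter model, using the common approximation
# training FLOP ~= 6 * parameters * training tokens (an assumption,
# not a formula from the AI Act or the Code of Practice).

params = 1e9      # one billion parameters
tokens = 1.7e13   # hypothetical training set of ~17 trillion tokens

training_flop = 6 * params * tokens
print(f"Estimated training compute: {training_flop:.1e} FLOP")  # ~1.0e+23

GPAI_THRESHOLD_FLOP = 1e23
print("Meets the indicative threshold:", training_flop >= GPAI_THRESHOLD_FLOP)
```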

In terms of generality, the code notes that language-generating models — those producing text, code or audio, including speech — are typically more capable of competently performing a wider range of tasks than other models. Image and video models usually have narrower uses than language models but can still qualify as general-purpose AI models. Text-to-image and text-to-video models often meet the general-purpose criteria given their capability to produce diverse visual outputs.

While the code includes examples to help providers assess whether a model qualifies as general-purpose AI, each case must ultimately be evaluated individually.

General-purpose AI models with systemic risk

The AI Act distinguishes general-purpose AI models from general-purpose AI models with systemic risk.

According to Article 3(65) of the AI Act, systemic risk refers to the high-impact capabilities of general-purpose AI models that can significantly affect the EU market — either due to their wide reach or because of actual or foreseeable negative impacts on public health, safety, security, fundamental rights or society that can spread widely across the value chain.  

General-purpose AI models with systemic risk are likewise classified by training compute measured in floating-point operations. According to Article 51(2) of the AI Act, models trained using a cumulative amount of compute greater than 10 to the power of 25 FLOP are presumed to have high-impact capabilities. The logic is that the bigger the model, the higher the impact and the greater the risk.
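Purely as an illustration, the two presumptive thresholds can be expressed as a simple classification by cumulative training compute. The function below is a hypothetical sketch that encodes only the numeric presumptions; the legal assessment remains case by case.

```python
# Hypothetical helper classifying a model by cumulative training compute
# against the AI Act's two presumptive thresholds.

GPAI_THRESHOLD_FLOP = 1e23           # indicative general-purpose AI threshold
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # Article 51(2) high-impact presumption

def classify_by_compute(training_flop: float) -> str:
    if training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "general-purpose AI model with systemic risk (presumed)"
    if training_flop >= GPAI_THRESHOLD_FLOP:
        return "general-purpose AI model (presumed)"
    return "below the indicative general-purpose AI threshold"

print(classify_by_compute(3e25))  # frontier-scale training run
print(classify_by_compute(5e23))  # mid-scale model
```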

The main reason for this distinction is to apply stricter rules to general-purpose AI models with systemic risk. All general-purpose AI models must comply with the AI Act's Article 53, which outlines core transparency, documentation and copyright obligations. Additionally, Article 54 requires providers not established in the EU to designate an authorized representative. General-purpose AI models with systemic risk are further subject to the enhanced requirements of Article 55, which introduces obligations for systemic risk management, including model evaluations, incident reporting and cybersecurity measures.
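That tiering can be summarized as a simple lookup. The snippet below is an illustrative aid, not a legal checklist, and merely paraphrases the obligations named above.

```python
# Illustrative summary of the tiered obligations described above.

BASELINE = [
    "Article 53: transparency, documentation and copyright obligations",
    "Article 54: authorized representative for providers outside the EU",
]

OBLIGATIONS = {
    "general-purpose AI model": BASELINE,
    "general-purpose AI model with systemic risk": BASELINE + [
        "Article 55: systemic risk management, including model evaluations, "
        "incident reporting and cybersecurity measures",
    ],
}

for category, duties in OBLIGATIONS.items():
    print(category)
    for duty in duties:
        print(" -", duty)
```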

Defining general-purpose AI model providers and the act's extraterritorial scope

Though there are nuances, the term "provider" generally refers to the actor developing a general-purpose AI model. According to Recital 97, a general-purpose AI model is considered placed on the market when made available through libraries, application programming interfaces, direct downloads or physical copies.

The code gives examples of providers placing general-purpose AI models directly on the market, but in practice these models usually serve as the foundation for AI systems that are then sold or made available.

The general-purpose AI model provider and the AI system provider can be the same entity, as with OpenAI, which provides both the underlying GPT models and the ChatGPT system, or two separate actors: the general-purpose AI model provider and a downstream AI system provider. Once the AI system becomes available in the EU, both providers fall under the AI Act regardless of their location. This means general-purpose AI model providers may not be able to control whether the AI Act applies to them.

Exemptions apply where the general-purpose AI model is excluded from EU distribution or is made available as open-source software. It remains to be seen whether these exemptions will become relevant. Given the scale of general-purpose AI models, it is difficult to imagine a case where the AI Act would not apply.

Providers must take note of the AI Act's extraterritorial scope, which is broader than in other EU laws like the General Data Protection Regulation. Whereas the GDPR requires active targeting or monitoring of individuals, the act's extraterritorial scope is triggered if the "output produced by the AI system is used in the Union," leaving both the general-purpose AI provider and the AI system provider subject to its rules, often without their direct involvement or intent.

Authorized representatives for general-purpose AI models

Appointing a representative for providers without an establishment in the relevant jurisdiction is firmly embedded in privacy and digital governance laws — for example, the GDPR, Switzerland's Federal Act on Data Protection and Turkey's Personal Data Protection Law. The AI Act is no exception.

Under the AI Act, however, the authorized representative takes on additional responsibilities, similar to those in product liability law. This means responsibilities beyond serving as a local point of contact for authorities and stakeholders; the representative also plays a role in ensuring safety and regulatory compliance. A representative must confirm technical documentation has been drawn up, verify a provider meets obligations under Articles 53 and 55, and serve as gatekeeper of technical documentation for the EU AI Office.

Although general-purpose AI providers are not expressly required to register their representative with authorities — unlike AI system providers — registration should be pursued for several compelling legal and practical reasons. There is no requirement to publicly list the representative's contact information, meaning registration is the only mechanism by which the AI Office or national authorities can become aware of the appointment. In addition, Article 54(5) of the AI Act requires representatives to terminate their mandate and notify the AI Office if the general-purpose AI provider is non-compliant, which only makes sense if the representative is registered with the AI Office.

Conclusion

The AI Act is a landmark regulation for technology that is shaping our present and future.

Given AI's global importance, its regulation is highly controversial; the EU must build international support for adherence to its AI Act and defend such regulation amid the global race for AI dominance.

As enforcement begins, the EU will need to strike a careful balance between regulatory ambition and practical implementation. The release of the General-Purpose AI Code of Practice — though just two weeks before the deadline — marks a necessary step toward clarity. Further refinements will be key to support both innovation and compliance in the AI ecosystem.

Andreas Mätzler, CIPP/E, CIPM, FIP, is CEO and Katharina Jokic is a privacy professional at Prighter.