As artificial intelligence models and systems grow increasingly powerful, the urgency to establish clear and workable regulations for their responsible use becomes more evident. The European Union's AI Act, introduced in July 2024, represents a significant step forward, aiming to provide a robust framework for AI governance.

Central to the AI Act's implementation is the Code of Practice, a landmark initiative to translate the AI Act's high-level obligations for general-purpose AI model providers into concrete, operationally actionable requirements. It is designed as a co-regulatory tool: EU regulators, such as the EU AI Office, general-purpose AI providers and other stakeholders work together to draft a set of requirements providers can rely on to demonstrate compliance with their AI Act obligations. The timeline for publishing a final code is very ambitious, however, as the code must be finalized by 2 May 2025.

With such a tight timeline for its completion, the process for designing the code becomes as important as its content. In a detailed August 2024 study, the authors examined the factors relevant to the code's success and offered 10 recommendations to that end.

The AI Office, responsible for facilitating the drawing-up of the code, has since made substantial progress in kicking off the drafting process, recently publishing the first draft of the General-Purpose AI Code of Practice. The draft will now be discussed among the relevant stakeholders, and the chairs will present the key takeaways from that discussion on 22 November. This provides a timely opportunity to reflect on recent developments and offer some insights into the trends and challenges we are already identifying.

What has happened so far?

The AI Office kicked off the process at the end of July 2024 by launching a call for expression of interest to participate in the code's drafting, along with a consultation giving stakeholders the opportunity to weigh in on its topics.

The office also announced the drafting procedure. Following a September kick-off plenary meeting, participants are engaging in three rounds of plenary sessions across four working groups, each focused on specific wide-ranging topics, from transparency and copyright to risk assessment and management. The resulting draft code will be presented during a closing plenary session in March or April.

A challenging balance of stakeholders' participation

The AI Office announced that around 1,000 stakeholders will participate in the process of drafting the code. The wide range of participants, the complexity of the subject matter and the short timeframe are all unprecedented.

For the code to provide legal certainty and work in practice, it needs to be grounded in technical realities and reflect the current technical state of AI. Striking this delicate yet essential balance among so many different views from so many stakeholders will be a challenge, especially given the ambitious timeline, which leaves the AI Office with only six months to finalize the code.

The code is, in essence, a legal instrument, not a political vehicle. Treating all views expressed in this process as equally valid, especially those not grounded in technical realities, could considerably undermine its success.

In addition, the composition of participating stakeholders raises concerns. The AI Act clearly foresees different levels of participation for general-purpose AI providers on the one hand and other stakeholders on the other. While general-purpose AI providers are to be invited by the AI Office to participate in the drawing-up of the code, other stakeholders may only support the process.

Under the current composition of stakeholders, however, general-purpose AI providers represent only around 4% of all participants and have just one designated group through which to engage with the chairs of the four working groups.

Against this backdrop, it is important to remember that general-purpose AI providers are the addressees of the code and should, therefore, be afforded the more prominent role in the drafting process that the AI Act foresees. After all, they have the expertise and know how this constantly evolving technology works. If they do not become an integral part of the process, or if other stakeholders hijack it, the code risks drifting off topic and may not work in practice.

Governance could lead to insufficient consideration of providers' practical experience and expertise

The composition of the chairs and vice-chairs of the four working groups raises some questions about balance and diversity. On closer inspection, the composition is clearly skewed toward academia.

This raises concerns about whether practical experience and viewpoints will be sufficiently reflected. The code is intended to provide guidance and legal certainty not only for regulated entities, but also for regulators. It will be important to ensure the academic skew does not crowd out practical considerations, including in terms of enforcement and engagement with code signatories.

It is important that these concerns are addressed and mitigated in the drafting process by ensuring that contributions grounded in practical experience are given particular consideration.

A risk of material extension that goes beyond the code's core issues

The code aims to operationalize numerous AI Act obligations, ranging from transparency requirements and internal compliance policies to model evaluations and risk assessments. These obligations are as important as they are vague.

It will be challenging for the AI Office, the chairs and vice-chairs of the working groups, and all relevant stakeholders to translate these important and complex obligations into operational rules. All participants need to embrace that challenge and channel their efforts into keeping the code tightly focused on this task; its success depends on it.

Recent events, however, give rise to concerns that the code might wander off topic, toward issues beyond the scope of operationalizing the AI Act's requirements for general-purpose AI providers. For example, the AI Office's consultation questionnaire gave participants the opportunity to share their views and findings for drafting the code, but it also raised issues and specific requirements going beyond the code's core focus. Noteworthy examples include requirements related to mandatory know-your-customer practices and third-party audits. By the same token, there are moves to introduce and negotiate highly complex copyright questions in the context of the code.

The AI Office should remain in the driver's seat

The AI Act makes very clear that the AI Office is the primary actor responsible for facilitating the code's drafting process. Based on this explicit mandate, the AI Office needs to remain in the driver's seat throughout the drafting process, and outsourcing any critical functions to external third parties should be met with skepticism. For that reason, the announced involvement of external consultancy firms in the drafting process raises concerns.

While it is not a problem for consultants to provide purely administrative support, it would be concerning if they were given responsibility for facilitating core components of the code or for content-related tasks. It would be problematic, for example, if outside consultancy firms were tasked with pre-screening stakeholder input and drawing conclusions from it.

Conclusion

These possibilities are all the more concerning because, under the AI Act, adherence to the code is not mandatory for general-purpose AI providers. Should the final version of the code lack operational traction, it could therefore remain a piece of paper that very few general-purpose AI providers rely on: in short, a missed opportunity.

Yann Padova is IAPP Country Leader, France, and partner at Wilson Sonsini Goodrich & Rosati.

Sebastian Thess is an associate at Wilson Sonsini Goodrich & Rosati.