With the drafting of a key piece of the EU Artificial Intelligence Act's enforcement mechanism slated to begin at the end of September, thousands of entities are looking to shape how that process unfolds.

The AI Office has just nine months to write the code of practice, which will govern general-purpose AI across the bloc. Given that relatively short window, a new report commissioned by the Computer and Communications Industry Association argues drafters should be careful not to repeat regulators' past mistakes.

That means ensuring the code's scope is appropriate to the regulation and that the AI Office remains engaged in the drafting process. Those points are among 10 recommendations Wilson Sonsini Goodrich & Rosati's Yann Padova and Sebastian Thess outlined in the CCIA report on how to create a governance scheme whose ramifications will be felt across the world.

The report provides a window into how some of the most prominent members of the industry want that drafting process to go.

"As the participation in and adherence to the (AI Act) code is voluntary, the procedure of drawing up the AIA code and its content need to be attractive to ensure the code's success," the report states. "Otherwise, there is a risk that (general-purpose AI) providers will not participate and rely on their in-house compliance solutions instead." 

Though participation in the code is voluntary, providers that do not sign it will have to demonstrate other means of complying with the AI Act. Eschewing the code in favor of internal compliance practices raises the risk of fragmentation rather than consistent application, an issue that would likely mean legal battles down the road.

The stakes are high because of who is affected by the code. General-purpose AI systems can have a variety of uses, sometimes beyond their developers' intentions. Occasionally referred to as "foundation models," these systems serve as the blueprint for other, more targeted AI products. Search engines built on ChatGPT's large language model are just one example.

Kasper Peters, the head of communications for CCIA Europe, said commissioning the report to inform the industry group's input on the code felt critical because of the tight time frame for bringing the code online.

"Finalizing a code of practice on a frontier technology like GPAI in only a couple months is an unprecedented exercise," he said. If the drafting process falls short, Peters indicated "it could mean less innovation and reduced availability of cutting-edge AI technology for EU consumers and businesses."

The AI Office held an open call for interested parties to submit input on how the code should be written, attracting nearly a thousand applicants.

The recommendations

The report breaks the recommendations down into three overarching ideas: achieve consistency with international approaches, provide legal certainty and create a solid drafting process.

Determining recommendations for the code's drafting process began with examining other codes of conduct to see how they were structured and what did and did not work, said Padova, who also serves as IAPP country leader for France. Such governance tools accompany other regulations, including the EU General Data Protection Regulation, and appear to be growing in popularity in technology industries.

The report argues codes of conduct can be helpful because they are more flexible and more easily updated than hard-and-fast regulation.

But how often codes of practice succeed at their task is hard to say, according to Padova. Feedback from industry members indicated their reception has been somewhat mixed.

"Some were seen as overly costly, overly prescriptive — like I think one was described as tortuous," Padova said. "So the idea is quite attractive, but how does it work in practice?"

And with a short time frame to produce recommendations, Padova said he and Thess looked to other prescriptive documents, such as the G7 AI principles and code of conduct and the International Organization for Standardization's standard for AI management systems, as well as frameworks in other countries for inspiration.

"You can't compare something that is very soft law, like the Biden executive order, which is really American and legally mixed, with a regulation for 27 countries," he said. "But still, we wanted to look outside the EU and not behave like an island."

To get AI providers to buy into the code, the report suggests drafting it in a way that builds in incentives, such as legal certainty or interoperability with other regimes, a "key requisite" for EU companies to compete on the global market.

That legal certainty also means sticking within the AI Act's boundaries.

The report suggests code drafters should steer clear of issues such as copyright, saying how those laws interact with the training of AI models is a complex legal question. Drafters should also avoid getting too bogged down in the weeds, lest they create a document that cannot adapt as technologies do.

"Sometimes you go a bit beyond what is within the law in order to make it clearer. But with making it clear, you will expand the scope and introduce new requirements that are not really provided by the law or the regulation," Padova said.

Staying within the scope of the AI Act's plan for the code of practice also means honoring the perspectives of the most affected stakeholders. Padova said there is a balancing act between civil society and industry, and giving AI providers the reins to write the code has drawn criticism. But the report notes the act specifically names those providers as participants in the drafting process, while other stakeholders can only play a supporting role.

Still, the report's final recommendation cautions that no single provider should hold outsized sway over the process. The AI Act code "should not endorse or refer to specific commercial solutions promoted by internal or external stakeholders," according to the report.

Instead, drafters should "be focused on finding workable solutions on the basis of the GPAI provider's experience and in alignment with the AI Office."

Caitlin Andrews is a staff writer covering AI for the IAPP.