After months of negotiations, and three weeks before a legal deadline takes effect, the European Commission has published the final version of a code of practice meant to govern how general-purpose artificial intelligence models operate.
The General-Purpose AI Code of Practice is voluntary for organizations looking to demonstrate compliance with the EU AI Act. Organizations that agree to follow it will see a "reduced administrative burden" and more legal certainty than those that choose to show compliance in other ways, the Commission said.
That promise hints at the ongoing debate around the bloc's digital rules, which companies have criticized as too burdensome and which may be subject to revision in an omnibus package. The code was supposed to come out in May, months before the act's rules on GPAI take effect in August. The Commission still needs to publish guidance on which AI providers are considered general-purpose. Technology businesses and those who use their products have pressed the Commission to delay enforcement of the act, something the executive arm of the European Union said is not going to happen at this time.
It now remains to be seen whether the Commission and member states will endorse the code and whether companies, skeptical or not, will sign onto it. Commission Executive Vice-President for Tech Sovereignty, Security and Democracy Henna Virkkunen urged organizations to do so, saying the code will make AI safe and transparent while allowing for innovation.
"Co-designed by AI stakeholders, the Code is aligned with their needs," she said in a press release. "Therefore, I invite all general-purpose AI model providers to adhere to the Code. Doing so will secure them a clear, collaborative route to compliance with the EU's AI Act."
Safety, transparency and copyright
The code is broken into three parts focused on safety and security, transparency obligations and copyright.
Signatories would be expected to understand the systemic risks associated with some GPAI models and take steps to monitor and mitigate them. They would have to maintain, and make available, estimates of when models might exceed the highest risk tiers, descriptions of their systemic risk acceptance criteria, the security measures taken to address those risks and a plan for implementing a mitigation framework.
"The Signatories further recognise that given the rapid pace of AI development, purposive interpretation focused on systemic risk assessment and mitigation is particularly important to ensure this Chapter remains effective, relevant, and future-proof," the chapter states.
Signatories can comply with the act's transparency requirements by maintaining information on a model documentation form. Information intended to help the AI Office or a national competent authority would only need to be made available following a request from those bodies, which would need to state the legal basis of the request. Open-source models would not need to keep model documentation unless they fall under the systemic risk category.
Signing onto the code would also mean committing to document how an AI model complies with EU copyright law and assigning people within the organization to ensure that policy is followed. Adhering to the code would mean promising not to circumvent measures "that are designed to prevent or restrict unauthorised acts in respect of works and other protected subject matter, in particular by respecting any technological denial or restriction of access imposed by subscription models or paywalls."
Web crawlers would need to exclude websites with reputations for publishing copyrighted works without permission; the EU will maintain a list of such sites.
A tale of 1,001 stakeholders
Publishing the code was a feat involving more than 1,000 stakeholders and several drafts in a process that began in September 2024. The process was dogged by complaints from both sides of the regulatory aisle: Industry groups largely said the code remained too restrictive, while civil society and safety advocates argued the drafters were too deferential to technology companies' demands.
Drafters of the act have pressured the Commission not to allow the code to weaken the act's overall intent. Italian Member of European Parliament Brando Benifei, a co-rapporteur of the act's trilogue process, said in a statement provided to the IAPP that lawmakers had managed to secure important provisions on fundamental rights and copyright protections in the final draft.
"Model providers obtained important concessions, so there's no excuse not to uphold the Code," Benifei said. "The credibility of Europe's AI framework now depends on the AI Office's ability to translate these commitments into practice with robust oversight, real consequences for non-compliance, and ongoing dialogue with civil society."
But a group of MEPs, including Benifei, Ireland's Michael McNamara, Germany's Axel Voss and Sergey Lagodinsky and the Netherlands' Kim Van Sparrentak, alleged the Commission had allowed last-minute removals of elements around public transparency and that weaker risk assessment and mitigation provisions had made it into the final product.
In a letter posted on LinkedIn, Voss said the group continues to feel the code narrows the scope of the adopted act and creates legal uncertainty.
"How does the Commission consider the objectives of the AI Act and due process to be safeguarded if the European Parliament was not consulted on such significant changes on the final draft, while most providers reportedly received the full text of the final draft?" the lawmakers wrote.
Meanwhile, Computer & Communications Industry Association Europe Senior Policy Manager Boniface de Champris said the final version of the code remained overly prescriptive and went beyond the act's scope, putting any signatory that chooses to agree to it at a higher regulatory burden. While safety and security measures have been streamlined, he said provisions around handling copyright complaints and opt-out mechanisms had worsened.
"With so little time left, companies are left in the dark, and it's clear more time is needed to finalise the overall framework and give companies a fair compliance window," he said in a press release.
Marco Leto Barone, a policy director with the Information Technology Industry Council, said in a statement that companies will have to decide whether the code is workable. The group will be watching how the Commission and member states assess how closely the code aligns with the act's requirements.
"Clear and well-scoped Commission guidelines are now essential to fully understand the scope of the measures. Guidelines must also grant sufficient time for implementation and compliance, given the imminent entry into application of the AI Act rules for AI models," he said.
Next up, member states and the Commission will need to officially endorse the code. Once that happens, providers of GPAI can voluntarily sign onto the code and adhere to its requirements.
Caitlin Andrews is a staff writer for the IAPP.