Two unofficial versions of the consolidated text of the proposed EU Artificial Intelligence Act leaked online Monday, indicating progress on the major legislation continues in earnest.

Euractiv Technology Editor Luca Bertuzzi posted that given "the massive public attention to the (AI Act), I've taken the rather unprecedented decision to publish the final text." The four-column document, which amounts to a scroll-worthy 892 pages, includes the original European Commission proposal next to the European Parliament and Council's mandates. The fourth column to the right features the draft consolidated agreement with redlines. 

Shortly after Bertuzzi's post, European Parliament Senior Advisor Laura Caroli shared a consolidated 258-page document online, as well.  

News of the text was not unexpected. Last week, Bertuzzi reported the hard deadline to finalize the consolidated text was indeed Monday, 22 Jan., ahead of Wednesday's Telecom Working Party meeting, "with the view of adopting it at the ambassador level (COREPER) on 2 February."

According to Bertuzzi's reporting Monday, the text was shared with EU Council members Sunday afternoon. France continues to mull whether it can form a blocking minority over its concerns with the regulation of foundation models, "with the view of getting some concessions into the text." Bertuzzi said, "the picture will become clearer when the member states provide their feedback at the technical level."

Late last year, trilogue negotiations hit a snag when France, Germany and Italy pushed to remove the proposed approach to regulating foundation models, which, at the time, prompted parliamentary officials to walk out of the negotiations. Progress resumed, however, in early December through two marathon trilogue negotiating sessions, which eventually produced a political agreement on the AI Act. 

The timeline to agree on a final text remains tight as parliamentary elections take place this June. Bertuzzi reported Monday that "national delegates will not have enough time to analyse the entire text but will have to focus on the key articles." 

Key dates

With an unofficial, consolidated text available, it is now possible to glean notable takeaways for those following along with AI Act progress. 

IAPP Research and Insights Director Joe Jones focused on key dates within the text. The AI Act would enter into force on the 20th day after publication in the EU Official Journal and enter into application 24 months after entry into force, except for specific provisions. 

Different portions of the law would become applicable in phases. It has been known that prohibitions on unacceptable-risk AI systems would apply 6 months after entry into force.

However, Jones highlighted that obligations for high-risk AI systems would not be applicable until 36 months after entry into force. This extended time period appears to have flown under the radar in public discussion of last December's trilogue meetings.

A year after entry into force, obligations for providers of general purpose AI models, provisions on penalties for general purpose AI systems, and the appointment of member state competent authorities would all be applicable. The European Commission implementing act on post-market monitoring and the list of elements that must be included in the monitoring would be applicable 18 months after entry into force.

Finally, 24 months after entry into force, the AI Office must ensure that classification rules and procedures are up to date; member states must outline and notify the commission about the rules on penalties, including administrative fines, and ensure they are properly implemented; and member states must ensure their competent authorities establish and make operational at least one AI regulatory sandbox at the national level.

Governance, knowledge and training highlights

IAPP Managing Director for Europe Isabelle Roccia, CIPP/E, and European Operations Coordinator Laura Pliauskaite focused on extracts related to governance, training, qualifications and knowledge. Though there are multiple areas where these are embedded into the text, here are some high-level takeaways. 

There are several recitals in the text that include direction for AI governance and appropriate training, qualifications and skills. Recital 9(b) includes a focus on "AI literacy" intended to provide "all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement."

Recital 48 states that high-risk AI systems should be designed so that "natural persons can oversee their functioning" and that "impacts are addressed over the system's lifecycle." Those who oversee such functioning must have "the necessary competence, training and authority to carry out that role." 

Recital 58 includes language requiring deployers of high-risk systems to appropriately monitor those systems, ensure instructions for use and obligations are provided, and maintain records of such actions. Again, those tasked with these obligations should be appropriately trained.

For general purpose AI systems, Recital 60(q) includes language noting that providers should "continuously assess and mitigate systemic risks, including for example putting in place risk-management policies" that include accountability processes. 

Several articles touch on AI governance and knowledge obligations as well, including Articles 3, 4, 9, 14, 29, 33, 54, 55 and 59. At the enforcement level, Article 59(4) says that "Member states shall ensure that the national competent authority is provided with adequate technical, financial and human resources."

This is a developing story. The IAPP will continue to follow along, share the latest news and offer up in-depth resources for understanding the EU AI Act's requirements and obligations.