Rules on prohibited artificial intelligence practices and AI literacy requirements under the EU AI Act now apply, even though the European Commission has yet to publish complementary guidance on how to comply with either. The first wave of AI Act provisions and restrictions took effect 2 Feb., one of several deadlines set under the act.

The act's list of prohibited AI practices is governed by Article 5 and includes facial recognition databases built by scraping images online or from security footage; biometric categorization systems used to infer sensitive characteristics; manipulative techniques to influence behavior; social scoring; criminal prediction software; emotion detection technology in schools and workplaces; and systems that exploit a person's age, disability or socioeconomic situation to influence behavior.

Stakeholders expected the provisions to arrive alongside additional guidance from the European Commission, but fresh guidelines were not immediately published. The Commission launched a consultation on AI Act prohibitions and the definition of an AI system in November 2024, with the aim of releasing guidelines based on consultation responses "in early 2025." When they arrive, the guidelines will be nonbinding and subject to updates.

The Commission also promised to release a repository of AI literacy practices gathered from providers and deployers. Article 4 requires those who use AI within a company to have sufficient technical knowledge of what the AI does, as well as an understanding of how it will be used and whom it will affect, in order to operate it safely.

In an analysis piece on understanding AI literacy — part of a series on AI literacy published by the IAPP that also includes assessing AI literacy needs and designing AI literacy programs — authors Erica Werneman Root, CIPP/E, CIPM, Nils Müller and Monica Mahay, CIPP/E, CIPM, FIP, suggest that it is one "of the most significant concepts emerging" in the AI governance space.

The European Commission will host a webinar on AI literacy 20 Feb., where the EU AI Office is expected to focus on Article 4 requirements, "presenting the initiatives foreseen to facilitate the implementation of this general provision."

While the application of the first provisions is notable, more impactful aspects of the AI Act take effect in six months. On 2 Aug., member state competent authorities will be appointed and given the regulatory power to issue fines and enforce the regulation. The list of prohibited AI practices will also become subject to its annual Commission review, and obligations for providers of general-purpose AI models will take effect on the same date.

In addition to prohibited practices and AI literacy, the provisions that now apply include the act's definition of what counts as an AI system. This first wave comes ahead of the highly anticipated Code of Practice for General-Purpose AI Models, currently in its second iteration with a final draft expected in April.

Any company found to be engaging in the prohibited practices in the EU faces a fine of up to 7% of its annual revenue or 35 million euros, whichever is greater. But the law makes exceptions for certain law enforcement-related activities, such as using real-time facial recognition in public spaces or as an aid in assessing a suspect's involvement in a crime.

Politico reports those carveouts have raised concerns among human rights groups. The Center for Democracy and Technology Europe criticized the Commission's handling of the rules in late January, saying the public stakeholder consultation did not include a full draft of the guidelines and that input was limited.

Caitlin Andrews is a staff writer for the IAPP.