Rules on prohibited artificial intelligence practices and AI literacy requirements under the EU AI Act began to apply 2 Feb., the first of several deadlines set under the act.
The act's list of prohibited AI practices is governed by Article 5 and includes facial recognition databases built by scraping images from the internet or security footage; biometric categorization used to infer sensitive characteristics; manipulative techniques to influence behavior; social scoring; software predicting criminal behavior; emotion recognition technology in schools and workplaces; and systems that exploit a person's age, disability or socioeconomic situation to influence behavior.
The European Commission issued guidelines outlining prohibited AI practices as defined under the act. The Commission said the guidance aims to provide "legal explanations and practical examples" toward compliance practices that will help achieve a "safe and ethical AI landscape." The guidelines, which are nonbinding and subject to updates, reflect stakeholder input from a November 2024 consultation.
In addition to prohibited practices, the stakeholder consultation covered definitions of AI systems. Those nonbinding guidelines arrived 6 Feb., with the Commission noting they cover the "practical application of the legal concept" for defining systems. The focus, according to the Commission, is to "assist providers and other relevant persons in determining whether a software system constitutes an AI system to facilitate the effective application of the rules."
The Commission also followed through on its promise to release a repository of AI literacy practices gathered from providers and deployers. Article 4 requires those who use AI within a company to have sufficient technical knowledge of what the AI does, how it will be used and whom it will affect in order to operate it safely.
In an analysis piece on understanding AI literacy — part of a series on AI literacy published by the IAPP that also includes assessing AI literacy needs and designing AI literacy programs — authors Erica Werneman Root, CIPP/E, CIPM, Nils Müller and Monica Mahay, CIPP/E, CIPM, FIP, suggest that it is one "of the most significant concepts emerging" in the AI governance space.
The European Commission will host a webinar on AI literacy 20 Feb., where the EU AI Office is expected to focus on Article 4 requirements, "presenting the initiatives foreseen to facilitate the implementation of this general provision."
While the application of the first provisions is notable, more impactful aspects of the AI Act will take effect in six months. On 2 Aug., member state competent authorities will be appointed and given the regulatory power to issue fines and enforce the regulation. The list of prohibited AI practices is also subject to its annual Commission review at that time, and obligations for providers of general-purpose AI models take effect on the same date.
The guidance on defining AI systems also arrives ahead of the highly anticipated Code of Practice for General-Purpose AI Models, currently in its second iteration, with a final draft expected in April.
Any company found using AI for prohibited practices in the EU faces a fine of up to 35 million euros or 7% of its global annual turnover, whichever is greater. But the law makes exceptions for certain law enforcement activities, such as real-time facial recognition in public spaces or use as an aid in assessing a suspect's involvement in a crime.
Politico reports those carveouts have raised concerns among human rights groups. The Center for Democracy and Technology Europe criticized the Commission's handling of the rules in late January, saying the public stakeholder consultation did not include a full draft of the guidelines and input was limited.
Caitlin Andrews is a staff writer for the IAPP.