
EU AI Act cheat sheet

This resource shows some of the most important features and requirements in the draft Artificial Intelligence Act.


Last updated: December 2023



On 8 December 2023, representatives from the Council of the European Union, the European Parliament, and the European Commission reached a political agreement on the draft Artificial Intelligence Act. The AI Act was proposed by the Commission in 2021, and the EU's law-making institutions have since been negotiating the terms of the proposed law. The political agreement is now subject to formal approval by the European Parliament and the Council, and the act will enter into force 20 days after publication in the Official Journal. The AI Act will then become applicable two years after its entry into force, with some exceptions: the prohibitions will apply after six months, while the rules on general-purpose AI will apply after 12 months.

Like the General Data Protection Regulation for privacy and data protection, the AI Act will be hugely important and consequential for the governance of AI in the EU and across the world. While the precise details and text of the AI Act are yet to be finalized, organizations developing and deploying AI will need to understand their new obligations ahead of time. The work for AI governance professionals begins now. This resource shows some of the most important features and requirements in the draft AI Act, based on official publications and public reporting. The IAPP additionally hosts an "EU AI Act" topic page, which is regularly updated with the latest news and resources.

EU AI Act cheat sheet

The basics

  • Definition of AI: Aligned to the recently updated OECD definition.
  • Extraterritorial: Applies to organizations outside the EU.
  • Exemptions: National security, military and defence; R&D; open source (partial).
  • Compliance grace periods: Between six and 24 months.
  • Risk-based: Prohibited AI ➜ High-Risk AI ➜ Limited Risk AI ➜ Minimal Risk AI.
  • Extensive requirements: For "providers" and "users" of high-risk AI.
  • Generative AI: Specific transparency and disclosure requirements.

Prohibited AI

  • Social credit scoring systems
  • Emotion recognition systems at work and in education
  • AI used to exploit people's vulnerabilities (e.g., age, disability)
  • Behavioral manipulation and circumvention of free will
  • Untargeted scraping of facial images for facial recognition
  • Biometric categorization systems using sensitive characteristics
  • Specific predictive policing applications
  • Law enforcement use of real-time biometric identification in public (apart from limited, pre-authorized situations)

High-risk AI

  • Medical devices
  • Vehicles
  • Recruitment, HR and worker management
  • Education and vocational training
  • Influencing elections and voters
  • Access to services (e.g., insurance, banking, credit, benefits, etc.)
  • Critical infrastructure management (e.g., water, gas, electricity, etc.)
  • Emotion recognition systems
  • Biometric identification
  • Law enforcement, border control, migration and asylum
  • Administration of justice
  • Specific products and/or safety components of specific products

Key requirements of high-risk AI

  • Fundamental rights impact assessment and conformity assessment.
  • Registration in public EU database for high-risk AI systems.
  • Implement risk management and quality management system.
  • Data governance (e.g., bias mitigation, representative training data, etc.)
  • Transparency (e.g., instructions for use, technical documentation, etc.)
  • Human oversight (e.g., explainability, auditable logs, human-in-the-loop, etc.)
  • Accuracy, robustness and cybersecurity (e.g., testing and monitoring)

General purpose AI

  • Distinct requirements for General Purpose AI (GPAI) and Foundation Models.
  • Transparency for all GPAI (e.g., technical documentation, training data summaries, copyright and IP safeguards, etc.)
  • Additional requirements for high-impact models with systemic risk: model evaluations, risk assessments, adversarial testing, incident reporting, etc.
  • Generative AI: Individuals must be informed when interacting with AI (e.g., chatbots); AI content must be labeled and detectable (e.g., deepfakes)

Penalties and enforcement

  • Up to 7% of global annual turnover or €35 million for prohibited AI violations.
  • Up to 3% of global annual turnover or €15 million for most other violations.
  • Up to 1.5% of global annual turnover or €7.5 million for supplying incorrect information.
  • Caps on fines for SMEs and startups.
  • European "AI Office" and "AI Board" established centrally at the EU level.
  • Market surveillance authorities in EU countries to enforce the AI Act.
  • Any individual can make complaints about non-compliance.