AESIA's AI Guidelines: Spain steps into the AI spotlight

Spain's AI authority, the Agencia Española de Supervisión de la Inteligencia Artificial, has published guidelines that help translate the principles of the EU AI Act into practical steps organizations can take to achieve compliance.

Contributors:
Joanna Rozanska
CIPP/E, CIPP/US
Associate
Hogan Lovells
When the EU Artificial Intelligence Act entered into force, many organizations across Europe asked the same question: where do we start? While the regulation sets out an ambitious framework, it deliberately leaves room for interpretation — and it is precisely in that space between legal text and practical implementation that uncertainty emerges.
Spain has moved quickly to fill that gap. With the creation of the Agencia Española de Supervisión de la Inteligencia Artificial, Spain became the first EU member state to establish a dedicated national AI authority, placing itself at the forefront of European AI governance.
The publication of AESIA's 16 AI guidelines is a natural extension of that leadership. Developed within Spain's AI regulatory sandbox pilot, the guidelines draw on real-world testing of AI systems under regulatory supervision. They constitute one of the first structured sets of interpretative criteria issued by a public authority in Europe. While formally nonbinding and addressed to the Spanish market, their relevance is likely to extend much further. In practice, they may influence how other regulators approach AI Act compliance.
This institutional momentum is not limited to AESIA. In parallel, Spain's data protection authority, the Agencia Española de Protección de Datos, has recently published its own guidance on agentic AI systems — those capable of pursuing goals with a high degree of autonomy — raising early warnings about opacity, manipulation and user control. Together, these initiatives reflect a coordinated national effort to translate the AI Act's principles into practical safeguards for high-impact AI.
A practical structure for AI Act compliance
AESIA's guidelines are deliberately organized as a progressive compliance roadmap, allowing organizations to engage with the AI Act at different levels of maturity. The introductory guides, Guides 1-2, provide a high-level overview of the regulation and its key concepts, serving as a natural entry point for organizations new to AI regulation. The technical guides, Guides 3-15, form the core of the package, each addressing a specific obligation applicable to high-risk AI systems, from risk management and data governance to transparency, human oversight and cybersecurity. Finally, Guide 16, together with a set of practical checklists, enables organizations to carry out structured self-assessments, identify compliance gaps and prioritize remediation efforts. Overall, the guidelines frame AI Act compliance as an ongoing operational process, rather than a one-off exercise.
What matters in practice
Rather than covering the guidelines exhaustively, it is useful to pause on areas where AESIA's guidance becomes particularly specific. These examples help illustrate how the AI Act's requirements play out in day-to-day compliance decisions.
A good starting point is transparency. AESIA treats transparency as a requirement that applies both during the design and development of high-risk AI systems and in the instructions provided to users. Providers are expected to manage system complexity by ensuring that information about the system's purpose, limitations, performance and conditions of use remains accurate and usable as the system evolves. To support this, the guide points to concrete measures such as structuring information at different levels of detail and using integrated metrics or indicators to help users interpret outputs and identify when further human review is required.
Explanations must also be adapted to the technical profile of the user: what works for a data scientist may be useless — or even misleading — for an operational user. The guidance gives use-case examples: a public official using an AI aid-allocation system should receive outputs and accuracy information in plain language to validate policy compliance, a data engineer monitoring the system should receive the same information plus technical details, and affected users, such as families or patients, should be given understandable reasons for decisions in terms they can grasp.
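To make the idea of layered information and integrated indicators concrete, here is a minimal Python sketch of one possible approach. It is not drawn from AESIA's guidance; the class, threshold and field names are illustrative assumptions about how a provider might tier output information by user profile and flag low-confidence outputs for further human review.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.75  # illustrative assumption; AESIA prescribes no value

@dataclass
class LayeredOutput:
    """One possible way to structure output information at
    different levels of detail, as the guidance suggests."""
    decision: str           # plain-language result for affected persons
    rationale: str          # understandable reasons for the decision
    confidence: float       # integrated metric to help interpret the output
    technical_detail: dict  # extra layer for technical users

    @property
    def needs_human_review(self) -> bool:
        # Indicator flagging when further human review is required.
        return self.confidence < REVIEW_THRESHOLD

    def view_for(self, profile: str) -> dict:
        """Return only the layers suited to the user's technical profile."""
        view = {
            "decision": self.decision,
            "rationale": self.rationale,
            "review_required": self.needs_human_review,
        }
        if profile == "technical":
            view["confidence"] = self.confidence
            view["technical_detail"] = self.technical_detail
        return view

output = LayeredOutput(
    decision="Aid application approved",
    rationale="Household income is below the programme threshold",
    confidence=0.62,
    technical_detail={"model_version": "2.3.1",
                      "top_features": ["income", "dependents"]},
)
print(output.view_for("operational"))  # plain language, review flag raised
print(output.view_for("technical"))   # same information plus technical details
```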
Closely connected to transparency is human oversight. AESIA is explicit that high-risk AI systems must be designed so that humans can meaningfully supervise their operation, with the authority and the practical ability to intervene, correct outputs or stop the system where necessary. The guidance anchors oversight in "human in command" — human responsibility for critical decisions — and distinguishes between "human-in-the-loop" arrangements, where human intervention is integrated directly into decision-making and is generally favored in high-risk contexts, and "human-on-the-loop" models, which rely on ex post human oversight and the possibility of reversing automated decisions.
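Purely as an illustration of that distinction, and not code taken from the guidelines, a short Python sketch can contrast the two oversight models; the `execute`, `reverse` and approval callbacks are hypothetical placeholders.

```python
from typing import Callable

def execute(proposal: str) -> None:
    print(f"executing: {proposal}")

def reverse(proposal: str) -> None:
    print(f"reversing: {proposal}")

def human_in_the_loop(proposal: str, approve: Callable[[str], bool]) -> None:
    """Human intervention is integrated into decision-making:
    nothing executes until a person with authority signs off."""
    if approve(proposal):
        execute(proposal)
    # otherwise the human has blocked the decision before it took effect

def human_on_the_loop(proposal: str, review: Callable[[str], bool]) -> None:
    """The system acts autonomously; oversight happens ex post,
    so the automated decision must remain reversible."""
    execute(proposal)
    if not review(proposal):
        reverse(proposal)  # the human reverses the automated decision

# Hypothetical high-risk decision gated on explicit human approval:
human_in_the_loop("deny benefit claim", approve=lambda p: False)  # never runs
human_on_the_loop("flag transaction", review=lambda p: False)     # runs, then reversed
```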
Oversight must also be usable in practice: different profiles, including specialists, technical teams and affected persons, must be able to understand and challenge system outputs at their own level of expertise. To support this, AESIA points to concrete measures such as suitable human–machine interfaces (noting that AI oversight dashboards remain embryonic and need to be advanced), targeted and ongoing training, and even forced error testing, where incorrect outputs are deliberately introduced to assess whether operators detect and correct them. In short, human oversight is treated as an operational control, not a formality, and as a core mechanism for keeping accountability with humans.
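Forced error testing lends itself to a simple harness. The following sketch is an assumption about how such a test could be instrumented, not an AESIA specification: it injects deliberately corrupted outputs at a configurable rate and measures the operators' detection rate.

```python
import random

def forced_error_test(outputs, operator_flags, error_rate=0.1, seed=0):
    """Deliberately introduce incorrect outputs and measure whether
    operators detect them. `operator_flags(item)` should return True
    when the operator marks the item as wrong (e.g., from review logs)."""
    rng = random.Random(seed)
    injected = detected = 0
    for item in outputs:
        if rng.random() < error_rate:
            corrupted = f"CORRUPTED::{item}"  # simulated incorrect output
            injected += 1
            if operator_flags(corrupted):
                detected += 1
    return detected / injected if injected else float("nan")

# Hypothetical run: an operator who catches every corrupted output.
rate = forced_error_test(
    outputs=[f"decision-{i}" for i in range(100)],
    operator_flags=lambda item: item.startswith("CORRUPTED::"),
)
print(f"operator detection rate: {rate:.0%}")
```

In a real setting the detection rate would come from operator review logs rather than a lambda, and low rates would feed back into the targeted training the guidance calls for.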
The guidance on technical robustness adds another layer of pragmatism. AESIA expects providers to define what an appropriate level of performance means in light of the system's intended purpose and risk profile, and to justify the metrics they select, such as accuracy, F1-score, AUROC or others. It warns that factors like overfitting and model or data degradation can undermine performance over time and calls for solid validation and monitoring practices across the system's life cycle. The guide also notes that design choices, including ensemble methods, can help manage prediction uncertainty and strengthen robustness. In practice, robustness-related performance must be monitored over time and corrected when agreed metrics fall below documented thresholds.
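In operational terms, that implies periodically recomputing the agreed metrics on fresh data and comparing them against documented thresholds. Here is a minimal monitoring sketch; the metric names match those cited above, but the threshold values and function names are illustrative assumptions.

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Documented minimum thresholds, justified in light of the system's
# intended purpose and risk profile. Values here are illustrative.
THRESHOLDS = {"accuracy": 0.90, "f1": 0.85, "auroc": 0.92}

def check_performance(y_true, y_pred, y_score):
    """Recompute the agreed metrics on fresh data and report any breach,
    e.g. caused by model or data degradation over the life cycle."""
    observed = {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "auroc": roc_auc_score(y_true, y_score),
    }
    breaches = {m: v for m, v in observed.items() if v < THRESHOLDS[m]}
    return observed, breaches

# Toy evaluation batch standing in for a fresh production sample.
y_true, y_pred = [1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7]
observed, breaches = check_performance(y_true, y_pred, y_score)
if breaches:
    print("corrective action needed:", breaches)
```

In a live deployment the same check would run at each monitoring interval, with breaches triggering the documented corrective process.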
These obligations are ultimately pulled together through technical documentation. This is where many organizations hit their first real stumbling block: the long Annex IV list of technical documentation requirements can feel overwhelming, and many companies simply do not know where to start. AESIA's guidance does not shorten that list, but it does help turn it into a structured, workable documentation set instead of a daunting checklist. Providers are expected to document design decisions, development methods, testing processes and risk mitigation measures across the system's life cycle. From a practical standpoint, the guidance recommends maintaining this material in a centralized documentation system, such as an internal wiki, supported by robust version control. For smaller providers, the reference to a future simplified documentation form to be developed by the European Commission is a notable and welcome signal.
Finally, all of these obligations are brought together in the checklist manual. This Excel-based tool allows organizations to self-assess their level of compliance against the requirements covered in the technical guides, assigning both maturity and implementation difficulty to each control measure. Based on this input, the tool generates an adaptation plan that helps prioritize documentation and implementation efforts. Tested within the Spanish regulatory sandbox, this checklist framework offers a rare thing in AI regulation: a concrete, structured starting point for turning legal requirements into an actionable compliance roadmap.
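The prioritization logic behind such a tool can be approximated in a few lines. The sketch below is a loose reading of the idea, scoring each control by its maturity gap relative to implementation difficulty; it does not reproduce AESIA's actual checklist scales or formulas.

```python
# Each control from the technical guides gets a self-assessed maturity
# (0 = absent, 3 = fully implemented) and an implementation difficulty
# (1 = easy, 3 = hard). Scales and example values are assumptions.
controls = [
    {"control": "risk management process",   "maturity": 1, "difficulty": 2},
    {"control": "data governance",           "maturity": 0, "difficulty": 2},
    {"control": "transparency instructions", "maturity": 2, "difficulty": 1},
    {"control": "human oversight design",    "maturity": 1, "difficulty": 1},
]

FULL_MATURITY = 3

def adaptation_plan(controls):
    """Order controls so that large compliance gaps that are easy to
    close come first -- one plausible reading of a prioritized plan."""
    def priority(c):
        gap = FULL_MATURITY - c["maturity"]
        return gap / c["difficulty"]  # big gap, low difficulty => high priority
    return sorted(controls, key=priority, reverse=True)

for item in adaptation_plan(controls):
    print(f"{item['control']}: gap={FULL_MATURITY - item['maturity']}, "
          f"difficulty={item['difficulty']}")
```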
A strong starting point, not the final word
AESIA's guidelines are the result of genuinely pioneering regulatory work. Drawing on Spain's AI regulatory sandbox, they translate the principles of the AI Act into practical guidance that speaks directly to business reality. In doing so, they offer an unprecedented level of legal clarity at a time when many organizations are still struggling to understand how the regulation will apply in practice.
That said, this is only the beginning. However useful these guidelines may be, the real complexity of the AI Act will emerge through application and enforcement. Edge cases, sector-specific tensions and cross-regulatory conflicts are unlikely to become fully visible until organizations begin applying the rules at scale.
For now, AESIA's guidelines offer something highly valuable: a credible early signal of how AI regulation may work in practice. For businesses willing to engage early, they provide not just a compliance tool, but a chance to shape internal governance before enforcement pressures fully materialize.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.



