As the regulatory environment develops at pace, organizations using artificial intelligence technologies must ensure they are equipped to navigate evolving requirements. One of the most significant concepts emerging in this space is AI literacy, which refers to the multidimensional competencies and skills necessary to manage and deploy AI systems responsibly and in compliance with legal requirements.

Throughout this text, references to articles and recitals relate to the EU AI Act. The relevance, however, extends beyond entities in scope of the European regime: AI literacy is a core component of AI governance in any jurisdiction and an essential part of effective risk management in the AI domain.

Defining AI literacy under the AI Act

The inclusion of a stand-alone AI literacy requirement in the AI Act underscores its importance in legal compliance and responsible AI governance. Article 4 contains the operative provision and requires providers and deployers of AI systems to "take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used."

Recitals 20, 91 and 165 also explicitly refer to AI literacy and are useful in interpreting the relevant requirements.

AI literacy itself is explicitly defined in Article 3(56) as "skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause."

This definition serves as the legal baseline for organizations' obligations to develop and sustain AI literacy within their workforce and among those who interact with AI systems.

The legal requirements

The AI Act positions AI literacy as an obligation that spans multiple actors: providers, deployers and affected persons.

For both providers and deployers, it is important to note the requirement applies to their own staff as well as "other persons" dealing with the operation and use of AI systems on their behalf. This means the requirements extend beyond the immediate workforce and can also apply to third parties, such as contractors, service providers and staff engaged under other outsourced arrangements.

Crucially, affected persons, i.e., those impacted by AI-driven decisions or operations, should also be provided the knowledge necessary to understand how decisions made using AI will impact them. While easily overlooked, this obligation is closely tied to transparency, explainability and human oversight.

Key takeaway: Ensure all relevant actors in the AI value chain are correctly identified so their AI literacy needs can be evaluated.

What does AI literacy require?

Providers and deployers must have sufficient skills, knowledge and understanding to make informed deployments of AI systems. They must be aware of the opportunities and risks of AI and the possible harm it can cause.

AI literacy obligations take two forms: a specific requirement regarding the AI systems that are, or will be, deployed by the organization, and a more general requirement regarding awareness of the risks, opportunities and possible harms of AI.

Since the requirement here is output based, i.e., actually achieving the skills, knowledge and understanding, simply providing training may not be sufficient if stakeholders do not engage and learn from it. Consideration should therefore be given to the metrics used to measure the effectiveness of AI literacy training, as more traditional metrics, such as course completion rates, may not be fit for purpose here.
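By way of illustration only, and not as a prescribed method, an output-based metric might pair completion data with an assessed competence score. The structure, field names and threshold below are hypothetical assumptions:

```python
from dataclasses import dataclass

# Hypothetical names and threshold, for illustration only.
SUFFICIENCY_THRESHOLD = 0.8  # assumed internal pass mark, not a legal standard

@dataclass
class TrainingOutcome:
    person_id: str
    completed_training: bool  # traditional input metric
    assessment_score: float   # output metric: 0.0-1.0 on a competence check

def is_sufficient(outcome: TrainingOutcome) -> bool:
    # Completion alone does not evidence literacy; assessed competence
    # must also meet the organization's chosen threshold.
    return outcome.completed_training and outcome.assessment_score >= SUFFICIENCY_THRESHOLD
```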

Key takeaway: It will be important to ensure both general and specific training needs are identified and met and to define output metrics early to ensure AI literacy is effectively measured.

What factors are relevant when assessing AI literacy requirements?

AI literacy is contextual, and any AI literacy program must therefore be tailored to the relevant circumstances of different organizations. Recital 20 contains examples of different contextual requirements that serve as a useful guide. Contextualizing AI literacy properly ensures those interacting with AI are able to make informed decisions.

Because AI literacy means different things in different scenarios, those tasked with implementation should consider factors that help contextualize it, such as the following (a sketch of how these factors might be recorded appears after the list):

  • The roles, technical knowledge, experience, education and training of relevant staff.
  • The organizational context in which AI systems are deployed or used, the risk profile, and the nature of the AI system itself.
  • The sector, environment or industry in which the AI system is deployed or used.
  • The impact of the AI system on relevant affected individuals or groups.
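As a purely illustrative sketch, and assuming an organization keeps structured internal records (all field names below are assumptions, not terms from the act), the contextual factors above could be captured per role or group so literacy needs can be evaluated and evidenced consistently:

```python
from dataclasses import dataclass, field

@dataclass
class LiteracyNeedsAssessment:
    # Who: role, technical knowledge, experience, education and training
    role: str
    technical_background: str
    prior_training: list[str] = field(default_factory=list)
    # What and where: the AI system, its deployment context and risk profile
    ai_system: str = ""
    deployment_context: str = ""        # sector, environment or industry
    risk_profile: str = "unclassified"  # e.g., per internal risk tiering
    # Whom: impact on affected individuals or groups
    affected_groups: list[str] = field(default_factory=list)
    identified_training_needs: list[str] = field(default_factory=list)
```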

Achieving AI literacy will also be a moving target as business models evolve in different organizations and industries. As such, any baseline assessments should include triggers for reviews, as well as reasonable periodic reviews.
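A minimal sketch of how such reviews might be scheduled, assuming an annual cadence and an illustrative set of trigger events (both are assumptions to be replaced by the organization's own risk-based policy):

```python
from datetime import date, timedelta

# Assumed cadence and trigger events; substitute the organization's
# own risk-based review policy.
REVIEW_INTERVAL = timedelta(days=365)
TRIGGER_EVENTS = {
    "new_ai_system",
    "significant_model_update",
    "role_or_staffing_change",
    "regulatory_development",
}

def review_due(last_review: date, events: set[str], today: date) -> bool:
    # A review is due when a trigger event occurs or the periodic
    # interval has lapsed since the last review.
    return bool(events & TRIGGER_EVENTS) or (today - last_review) >= REVIEW_INTERVAL
```

In practice, the trigger list should mirror the contextual factors identified in the baseline assessment.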

Key takeaway: AI literacy requires a methodical approach to identify the relevant requirements based on the context.

What standard of AI literacy is required?

Providers and deployers are required to take measures to ensure "to their best extent" a "sufficient level" of AI literacy.

In the absence of relevant standards and codes of conduct to address AI literacy, which may be forthcoming in due course per Article 95, there will likely be a range of interpretations as to what constitutes both a sufficient level and best efforts. However, the absence of a particular benchmark does not mean organizations can adopt a wait-and-see approach.

AI literacy obligations are among the first provisions of the AI Act to come into force, on 2 Feb. 2025. In practice, however, organizations are likely to have some leeway, as EU countries have until August 2025 to appoint the market surveillance authorities responsible for enforcing this legal requirement. Those tasked with implementing the act's requirements should therefore start identifying and assessing AI literacy needs early on. This will likely involve an assessment of initial minimum requirements and a roadmap to mature the program over time.

During closed meetings in December 2024, European Commission officials reportedly told national governments that voluntary guidance on AI literacy will be made available by the end of 2025 and that some examples of best practices will be shared by February 2025. This mirrors similar statements recently made to AI Pact participants.

Key takeaway: Organizations are required to take steps to identify AI literacy needs and ensure they meet sufficient levels based on the relevant context. This assessment should be documented, periodically reviewed, and include an assessment of minimum requirements and a roadmap for maturing the program over time.

Failure to implement AI literacy

There is no direct fine regime linked to AI literacy under the AI Act.

In the absence of an explicit penalty, regulators will likely take AI literacy into account as part of the overall context when assessing fines and sanctions for other breaches under the act.

When deciding whether to impose an administrative fine and the amount to impose in each individual case, Article 99(7)(g) states regard shall be given to "the degree of responsibility of the operator taking into account the technical and organisational measures implemented by it."

As AI literacy is an organizational measure, regulators will likely consider whether and to what extent a breach was facilitated by a lack of AI-literate staff, and whether the failure arose at an organizational level or was more a matter of individual human error.

In addition, under Article 85, there is the possibility of lodging a complaint with a market surveillance authority when a natural or legal person believes the provisions of the AI Act have been infringed.

Key takeaway: Although there is no direct fine regime for failure to implement AI literacy, any such failures will likely impact the level of fines and sanctions for other incidents or breaches.

Challenges in implementing AI literacy

The above sets the groundwork for understanding the legal obligations relating to AI literacy and demonstrates that this issue is more complex and nuanced than it first appears. Implementing these obligations comes with its own challenges, and the next articles in this series will provide guidance through at least some of the more important areas to navigate, including the following.

AI as fast-evolving technology. The dynamic nature of AI technologies means literacy is not a one-time achievement but a continuous process. As AI systems and the associated regulatory landscape evolve, so too must the knowledge and skills of those who work with them.

Rightsizing the AI literacy program. Ensuring AI literacy programs are contextually appropriate can be complex. AI systems vary widely in terms of their applications, their risks and the technical expertise required to manage them, requiring careful assessment and tailoring of AI literacy initiatives accordingly.

Resource constraints. Resource constraints pose significant hurdles. While smaller organizations may struggle to find proportionate support, larger organizations will likely grapple with significant complexity. In both instances, sufficient resource allocation for comprehensive AI literacy programs may be challenging.

The path forward: AI literacy in risk management and governance

For organizations to successfully meet the AI literacy requirements of the AI Act and other emerging regulations, it is essential to integrate AI literacy into broader governance frameworks. AI literacy should not be treated as a stand-alone compliance requirement but as an ongoing component of AI risk management and good governance.

By embedding AI literacy into existing frameworks and governance structures, organizations not only ensure compliance with legal requirements but also promote a culture of responsible AI use. Moreover, proactive AI literacy programs can help future-proof organizations against the rapid pace of technological and regulatory change, and may also unlock competitive advantages as organizations become more adept at leveraging responsible innovation.

Building a comprehensive AI literacy framework requires sustained effort, but the rewards — both in terms of compliance and competitive advantage — are well worth the investment.

Erica Werneman Root, CIPP/E, CIPM, is the co-founder of Knowledge Bridge AI and a consultant via EWR Consulting.

Nils Müller is a partner at Eversheds Sutherland.

Monica Mahay, CIPP/E, CIPM, FIP, is the director of Mahay Consulting Services and chief compliance officer at Sky Showtime.