As AI becomes an integral part of business operations, organizations must understand their legal obligations regarding AI literacy and translate these requirements into practical processes. For some, off-the-shelf solutions that offer introductory training and a basic overview of the risk landscape may be sufficient. For others, more thorough assessments and tailored content are likely to be required.
Why assess AI literacy?
It is important to identify and understand your audience for AI literacy. The AI value chain often includes multiple actors and entities, some acting in more than one capacity, and those with multiple roles have more nuanced training needs.
Understanding who you are responsible for training is frequently more difficult than it first appears, given the complex nature of corporate environments. Don't forget to consider scenarios where subcontractors, partners or outsourced service providers may be deemed to be dealing with the operation or use of AI systems "on your behalf," as required by Article 4 of the EU AI Act.
Similarly, while providers and deployers are directly caught by the AI literacy obligation, there is no provision or guidance to address the handover point between the two, so organizations will need to consider carefully where their obligations begin and end. It is, therefore, helpful to adopt a structured approach to provide consistency and rigor across the assessment process.
Adopting a systematic approach to AI literacy assessments
Assessing AI literacy needs is fundamentally similar to building other learning and development programs. This process involves identifying needs, tailoring content to the responsibilities of each role or group, and establishing benchmarks for measurable performance.
Below is a suggested framework for assessing AI literacy needs. However, it is important to note there are different ways these assessments can be structured depending on how pervasive the use of AI is across the organization and its risk profiles.
AI literacy is not a one-size-fits-all proposition, but practitioners can tailor their approach to suit the scenario by understanding the core requirements.
Step 1: Provider or deployer?
The first step is to understand whether the organization is a provider, deployer or both, and if activities involving the operation or use of AI systems may be outsourced to third parties acting on behalf of the organization or, indeed, where it may be acting on behalf of another.
Corporate structures are often messy and complicated, as exemplified by the challenges many faced when trying to conduct EU General Data Protection Regulation-related data mapping, data flow diagrams and records of processing activities. Conducting proper reviews and due diligence to carve out who is responsible for what when it comes to AI system development or use will be a foundational exercise not just for AI literacy but for AI governance as a whole.
Key takeaway: Don't underestimate the time required for this due diligence exercise; it may also involve aligning with third parties so everyone is clear on their roles and responsibilities relating to AI literacy.
Step 2: Role identification
Next, the relevant roles within the organization should be identified, considering various levels and types of interaction with AI systems. This ensures AI literacy efforts are tailored to each position's specific responsibilities, promoting effective and compliant AI use throughout the organization.
Catalog all roles that interact with or are affected by AI systems, paying particular attention to:
- Executive leadership: Leadership is responsible for strategic decisions, AI governance and risk appetite.
- Technical staff: This includes developers and data scientists who build, maintain and monitor AI systems.
- Operational staff: In these operational roles, employees may use AI tools in their daily tasks, such as customer service or human resources teams.
- Legal, compliance, risk and procurement: Scope of involvement in these roles will depend on how the other roles engage with AI systems, but both deep and broad expertise may be required.
- All staff: The entire workforce may have access to low-risk AI tools and systems, such as calendar management or note-taking assistants.
- Consider other people using AI on behalf of the organization: The AI Act's requirements apply to direct employees and those dealing with the operation or use of AI systems on behalf of an organization. This should be made easier by the due diligence completed in step one above.
By mapping these roles early in the assessment process, organizations can develop AI literacy programs tailored to the varied needs and responsibilities of different roles and functions across the organization.
Key takeaway: Understand which roles engage with AI systems. If the organization has only limited AI in use, it might be quicker and easier to work backward from the list of AI systems to determine training needs, rather than reviewing roles for the whole company.
Step 3: Risk-based tier assignment
After identifying roles, assign each to an appropriate AI literacy training tier based on its interaction level, decision-making authority and potential impact on stakeholders. This risk-based approach ensures AI literacy training is proportionate to each role's influence over AI systems and the level of understanding necessary to manage associated risks.
Tier 1 or general. This tier is for roles with limited decision-making authority and interaction with AI systems. It is focused on providing a basic understanding of AI technologies and concepts, ethical considerations, and general awareness of AI-related risks, harms and opportunities.
Tier 2 or operational. This tier is for roles with moderate interaction with AI systems or moderate impact on stakeholders and builds on Tier 1 training. It is focused on delivering an operational understanding of AI systems, practical knowledge of their applications, and more knowledge of regulatory, compliance and ethical considerations.
Tier 3 or technical. This tier is for roles with significant decision-making authority, high interaction with AI systems or substantial impact on stakeholders; training at this level may incorporate Tier 1 and Tier 2 content. It is focused on providing advanced technical knowledge, a comprehensive understanding of regulatory requirements, and the skills to manage AI systems in higher-risk contexts, covering more advanced ethical and stakeholder impacts.
Tier 4 or strategic. This tier is for roles with significant decision-making authority but limited direct involvement in the AI system or tool. These roles are likely to be executive or senior management or board members who have direct accountability. This tier is focused on clearly understanding risk and risk mitigation, as well as wider stakeholder, customer or industry impacts.
By aligning AI literacy training tiers with role-specific risks, organizations ensure each team member receives the necessary knowledge to fulfill their responsibilities safely and effectively, supporting AI governance and compliance across all levels.
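The tier-assignment logic above can be expressed as a simple decision rule. The sketch below is purely illustrative: the factor names, the 0-2 scoring scale and the thresholds are hypothetical assumptions, not anything prescribed by the AI Act, and any real assessment would calibrate them to the organization's own risk profile.

```python
# Illustrative sketch only: mapping roles to the four training tiers described
# above. Scoring scale and thresholds are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Role:
    name: str
    interaction: int         # 0 = limited, 1 = moderate, 2 = high
    decision_authority: int  # 0 = limited, 1 = moderate, 2 = significant
    stakeholder_impact: int  # 0 = low, 1 = moderate, 2 = substantial

def assign_tier(role: Role) -> str:
    """Assign a training tier per the four-tier model described above."""
    # Tier 4 (strategic): significant authority but limited direct involvement.
    if role.decision_authority == 2 and role.interaction <= 1:
        return "Tier 4: strategic"
    # Tier 3 (technical): high interaction, authority or stakeholder impact.
    if 2 in (role.interaction, role.decision_authority, role.stakeholder_impact):
        return "Tier 3: technical"
    # Tier 2 (operational): moderate interaction or moderate impact.
    if role.interaction == 1 or role.stakeholder_impact == 1:
        return "Tier 2: operational"
    return "Tier 1: general"

roles = [
    Role("Board member", interaction=0, decision_authority=2, stakeholder_impact=2),
    Role("Data scientist", interaction=2, decision_authority=1, stakeholder_impact=2),
    Role("Customer service agent", interaction=1, decision_authority=0, stakeholder_impact=1),
    Role("Facilities staff", interaction=0, decision_authority=0, stakeholder_impact=0),
]
for r in roles:
    print(f"{r.name}: {assign_tier(r)}")
```

Note the ordering of the rules: the strategic tier is checked first, because board-level roles would otherwise be swept into the technical tier by their decision-making authority alone.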
Step 4: Contextualizing AI literacy needs
AI literacy must be adapted to the unique context of each organization, industry and regulatory environment. It is also important for both general and specific training needs to be met.
This step assumes some general training content is used, for example to address general legal requirements, ethics, risks, harms and opportunities, and lists different considerations that could inform refinements to ensure training programs are relevant to both the AI systems in use and the specific requirements of each role. This customization could include:
- Senior management and board members: These individuals need to understand how AI aligns with the organization's strategic objectives and the potential liabilities associated with AI use. These teams are responsible for setting the organization's risk appetite and require a good understanding of AI governance, risk management and legal compliance tailored to the relevant industry, country and the organization's corporate maturity level.
- Technical teams: Sufficient technical training is essential, and teams must be equipped to identify and manage risks throughout the AI life cycle.
- Compliance and legal teams: Training should center on AI-specific requirements under relevant laws and governance/compliance frameworks; some technical training is likely required to support product teams.
- Operational users: For those who use AI systems in their daily tasks, focus training on how to use AI tools safely and effectively, interpret AI-generated outputs, and identify and report issues with AI systems. The details will depend on their function, such as HR, marketing, sales or customer service, and the risk profile of their role.
- Other employees: All employees should understand the organization’s policies and approach to AI and how these requirements relate to their function and role.
- Suppliers: Suppliers should provide, or be provided with, a sufficient level of AI literacy for the product/service they provide.
By developing tailored training that aligns with the specific responsibilities and risk profiles of different roles, organizations can ensure each team member gains the skills and knowledge required to interact with AI systems effectively and responsibly.
Step 5: Monitoring and reassessment
AI literacy is not static. As AI technologies evolve, so will the skills and knowledge required to manage them effectively. AI literacy assessments should be ongoing processes that could involve feedback loops, periodic training updates, refresher courses and more formal reassessment processes such as annual reviews or audits. It may also be prudent to define trigger points that would necessitate reassessing AI literacy needs, such as AI-related incidents or material AI system changes.
Step 6: Leverage external expertise
For many organizations, particularly those with limited in-house AI expertise, external support may be useful to assess AI literacy needs effectively. This could involve partnering with AI governance consultants, legal advisors or technical experts who can provide in-depth training on specific topics. Understanding what is available, how your audiences prefer to receive training and the level of personalization required is crucial to getting the best out of any external expertise in this domain.
Conclusion
Assessing AI literacy needs is a critical step in ensuring compliance with the AI Act and fostering a culture of responsible AI use in any jurisdiction. By understanding the context in which they operate and mapping roles and functions to appropriate training tiers, organizations can begin to take a structured approach to evaluating current training needs and tailoring AI literacy programs to their specific requirements.
Properly understanding the AI literacy needs of the organization not only supports good governance and risk management but also ensures AI literacy efforts are properly aligned with the operational realities of the business and focused on the areas that really matter.
Starting sooner rather than later is the key to success here.
Erica Werneman Root, CIPP/E, CIPM, is the co-founder of Knowledge Bridge AI and a consultant via EWR Consulting.
Monica Mahay, CIPP/E, CIPM, FIP, is the director of Mahay Consulting Services and chief compliance officer at Sky Showtime.