A stand-alone artificial intelligence literacy program can help employees understand key issues related to AI systems, but for long-term adoption, impact and efficiency, it must be woven into the broader fabric of an organization's operational structure.
The case for integration
As the first two articles of this series highlight, AI risks are inherently tied to specific scenarios and contexts, rather than just the technology itself. As a result, there will always be a need for continuous monitoring to evaluate how AI tools evolve over time and with use. This should be a distributed and continuous effort that, at a fundamental level, relies on stakeholders having sufficient knowledge to understand and spot issues at inception and over time, rather than only relying on periodic central assessments.
This means AI literacy cannot be delegated to a single team or control point; it must be embedded throughout the operational fabric of an organization.
By embedding AI literacy throughout operational processes and decision points, organizations can enable informed risk-based decisions at appropriate levels in a way that balances innovation with responsible governance. This integrated approach allows organizations to move quickly while ensuring employees across technical, legal and operational domains can effectively identify and manage AI risks in their specific contexts. Finally, by leveraging established processes and mechanisms, organizations can avoid duplication of efforts, reduce operational friction and ensure AI literacy becomes a natural extension of existing processes.
Step 1: Identify overlapping compliance areas
Organizations should begin by considering where AI literacy naturally intersects with existing compliance areas. Some typical examples are highlighted below.
Data protection
AI systems often process personal data, making AI literacy closely related to data protection obligations. For example, employees responsible for conducting data protection impact assessments should have sufficient understanding of AI-related risks such as automated decision-making, algorithmic bias, data minimization and transparency requirements. DPIAs and AI impact assessments are also good sources for identifying new training needs, as training frequently serves as a mitigation for the risks they identify.
Information security and cyber security
Traditional security frameworks may need to assess AI-specific vulnerabilities and risks, making AI literacy essential for security professionals. For example, teams conducting security assessments should understand AI-specific threats like model poisoning, prompt injection and training data extraction. Security incident logs can serve as triggers to identify literacy gaps, while security awareness training programs can be extended to cover AI-related risks.
Risk management
Risk management frameworks offer perhaps the most comprehensive integration opportunity. Organizations can incorporate AI risk categories into their enterprise risk frameworks and include AI literacy metrics in their monitoring processes. Risk treatment plans can drive AI awareness initiatives, while risk assessments help identify priority areas for AI training.
Ethics and compliance
Ethics and compliance frameworks round out the core integration points. Organizations will need to integrate AI risk considerations into corporate policies and supplier due diligence processes. These upgrades should be accompanied by training to ensure stakeholders have the foundational knowledge to implement the new requirements. Codes of conduct training can be enhanced to include AI literacy. Organizations may also extend existing ethics and compliance governance structures to account for AI-specific considerations. These structures can serve as valuable sources of AI training, with participants acting as champions for their respective teams.
ESG
The social element of environmental, social and governance programs tends to look at human rights issues, often including ethical impact assessments. AI literacy can be incorporated within this area, particularly in relation to how AI and ethics intersect and how this can be integrated into the wider company position on these issues.
There are many other organizational frameworks that can be leveraged to incorporate or trigger AI literacy considerations. Depending on the organization, these may include human resources and employment teams, which can recognize AI literacy-related efforts or develop bespoke training programs, as well as technical teams, such as data science and development, which can integrate technical requirements.
By identifying these overlaps, organizations can develop a more coordinated approach to AI literacy by leveraging existing workflows to embed AI knowledge across the organization.
Step 2: Implementation approach
After mapping the areas where AI literacy intersects with existing compliance efforts, consider opportunities to meet the identified AI literacy needs using existing playbooks, policies, procedures, forums and training material.
Let's consider a practical example of how a legal department might approach AI literacy integration across their existing frameworks. Through assessment, the team identifies several key AI literacy training needs, including understanding AI-specific IP issues and licensing requirements.
The team takes a multifaceted approach to addressing these training needs. First, they procure foundational training to help the team understand the basics of AI systems, how models are trained, and what this means for AI vendor agreements, AI system outputs and liability implications.
Rather than treating this as a one-off training need, the team then takes an integrated approach that weaves these learnings into their existing tools and processes.
They begin by updating their contract playbooks and templates, incorporating AI-specific negotiation guidance and new template clauses for AI licensing, data rights, and liability. Practical examples of negotiation positions for common AI scenarios are added, and existing contract review processes are enhanced with AI-specific decision trees.
The team then incorporates AI modules into their regular legal team training sessions and includes AI scenarios in existing contract drafting workshops. They develop practical exercises using real AI contract examples and create quick reference guides for common AI legal issues, all integrated into their existing contract-management framework.
These resources are hosted and shared via existing channels, maintaining consistency with how the team already manages legal knowledge.
Next, the team may start to leverage existing cross-functional collaborations by adding AI topics to regular meetings with technical teams and including AI updates in existing legal newsletters. They create dedicated AI discussion slots in regular team meetings and use established project review processes to discuss AI implications, ensuring AI literacy becomes part of regular business rhythms and forms part of their interaction with the wider company.
This integrated approach ensures AI literacy becomes part of the legal team's daily workflows rather than existing as a separate initiative. When a lawyer reviews a new AI vendor agreement, they can follow the updated playbook guidance, use new template clauses as starting points, reference quick guides for specific issues, consult documented precedents and leverage cross-functional expertise through established channels.
All these efforts build on familiar tools and processes while systematically embedding AI literacy across relevant topics. The end result is a sustainable approach to developing and maintaining AI competency that the team can build on as the legal environment around AI develops.
Step 3: Starting, evolving and monitoring integration
Different organizations will start from different levels of maturity and operational AI literacy. Adopting the structured process set out in this series will allow practitioners to identify and address foundational learning needs; as understanding increases across the organization, the integration steps then develop that knowledge from theoretical to practical.
As the integration portion of the program develops, it too will benefit from regular review cycles to assess how effectively knowledge is being integrated. Incidents should be treated as learning opportunities and should inform updates to the program. Similarly, any significant new AI use case should prompt reflection on whether additional integration efforts are needed. The program must also evolve with changing circumstances. This includes staying current with regulatory changes, updating for new AI technologies, adapting to organizational priorities and, ultimately, refining based on performance data.
The next article in this series looks at how to mature an AI literacy program, and it includes additional considerations relating to integration. Integration is not a one-time effort but requires ongoing updates and maintenance. No one expects all these points will be covered overnight. It is a continuous improvement process that takes time to develop as learnings become embedded and operationalized across the organization.
Conclusion
Successful integration of AI literacy into existing operational frameworks requires careful planning and coordination but offers significant benefits in terms of efficiency and effectiveness. By leveraging established processes and infrastructure, organizations can build sustainable programs that evolve with their needs. Importantly, the integration also translates the understanding of AI considerations from theoretical to practical.
The key to success lies in viewing AI literacy not as an additional burden but as an opportunity to build AI competency and competitive advantages. Through thoughtful integration, organizations can enhance their overall compliance postures while meeting the specific demands of AI regulation and AI governance.
Erica Werneman Root, CIPP/E, CIPM, is the co-founder of Knowledge Bridge AI and a consultant via EWR Consulting.
Monica Mahay, CIPP/E, CIPM, FIP, is the director of Mahay Consulting Services and chief compliance officer at Sky Showtime.
This is the fourth article in a five-part series on AI literacy. The first, "Understanding AI Literacy," explores the core legal requirements under the EU AI Act. The second article, "Assessing AI Literacy Needs," outlines a practical framework for assessing the AI literacy needs of an organization. The third, "Designing an AI Literacy Program," describes the process of designing a comprehensive AI literacy program that fits the organizational context.
For more information on AI, visit the IAPP AI topic page.