Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Canada, like other nations, is grappling with how to improve its artificial intelligence position domestically and globally. This includes understanding the opportunities AI presents to improve the development and delivery of services for Canadians, as well as determining how to increase the adoption of AI for the country's economic benefit.

The Canadian federal budget, released 4 Nov., included commitments to "invest in AI projects to further the technology's adoption at home, and catalyse private sector investment in Canada's most innovative startups." This direction appears to bolster AI and Digital Innovation Minister Evan Solomon's recent AI messaging: "sovereign not solitude."

This new funding commitment comes on the heels of a public consultation to support the development of a new AI strategy for Canada. The strategy is expected to be released at the end of the year, but the consultation questions themselves offer useful insight into the government's current thinking.

AI strategy in the absence of rules

With questions ranging from how Canada can invest in research and talent, to how best to accelerate the adoption of AI, improve commercialization and enable the right infrastructure, it is clear the government understands there are many facets to developing a sustainable and comprehensive strategy. Most important for me was the series of questions on the role of education and skills.

While I continue to believe we will be in an AI deregulatory environment for the foreseeable future, this consultation was a good reminder that governments have mechanisms beyond policy through which they can influence how AI is implemented. While these measures may be narrower, they can still have a significant impact.

For example, Singapore is investing in AI education and upskilling. A newly announced program provides AI training for public servants. The government has identified that an AI-fluent workforce will allow Singapore to maximize the use of AI and make jobs safer in government and across society.

The importance of AI governance professionalization

The IAPP participated in the Canada AI Strategy consultation to highlight the importance of professionalization for AI governance and the benefits a professional community can bring. We have seen that these communities are especially important when new markets and applications are being created. 

Given Canada's emphasis on trust as a central tenet of accelerating the adoption of AI, we noted that it is important, when developing AI talent, to think about cutting-edge research capabilities for these tools and to determine how those capabilities will be deployed in a trusted manner.

As our CEO J. Trevor Hughes, CIPP, has shared in the past, a great way to build trust in AI and improve AI adoption is to have governments and other organizations appoint a knowledgeable individual to ensure AI development, deployment and use is governed responsibly. Additionally, through training and education programs, it is worthwhile for all employees involved across the AI life cycle to have a level of AI literacy aligned with their roles and responsibilities. 

As lawmakers and society coalesce around the guardrails that are needed to mitigate the risks of AI, there is an imperative to train and empower the individuals who will put those guardrails into practice. Tomorrow's regulations will rely on the professional structures we build today. Waiting until new laws are on the books or in effect to build an AI governance workforce will inexcusably delay the implementation of AI governance protections that are needed now. 

Awareness does not lead to confidence

In preparation for developing our responses, I reviewed several AI sentiment surveys, and we explored the connection between lack of trust and AI adoption. We shared that researchers in British Columbia found that while B.C. residents are reasonably knowledgeable about AI, awareness did not lead to confidence.

The study shows 80% of respondents were nervous about AI. Interestingly, their concerns were not about catastrophic AI risks, but about the fundamental change AI could cause as it is woven through every industry and aspect of society. Specific issues included AI's ability to replace humans, the loss of personal agency, privacy, accuracy, a sense of disconnection in society and bias against certain populations. High-profile risks like deepfakes were also of significant concern.

It is useful to note that respondents to this survey, in line with similar global surveys, were excited about how AI could push the boundaries of what humans can achieve. They also expressed excitement about how quickly machines can learn and assist with tasks, improving efficiency and helping with critical issues such as medical breakthroughs and capacity shortfalls in some industries.

While the survey specifically mentions legislation, surveys on trust in AI tend to point to the need for a combination of hard and soft rules with accountability mechanisms to maximize AI adoption. Regardless of whether those accountability mechanisms come in the form of hard law or softer guidance, knowledgeable professionals inside organizations will be required to implement them.

Guidance is good, too

Building on the idea of hard and soft rules, I shared that this is a common theme in our AI governance reporting. In our 2024 AI Governance in Practice Report, respondents highlighted the role that frameworks, standards, regulations and norms have played to help businesses build and adopt AI. 

The importance of context is a common theme in this report and several of our other AI governance resources, reflecting a consistent message we hear in the community: Companies would like to see more nuanced frameworks, standards, regulations and norms that are specific to their use of AI, even in the absence of an overarching legislative framework.

Implementing AI literacy

In our feedback to Canada, all roads lead back to the people doing the work. Whether the question concerns research, trust, infrastructure or commercialization, and no matter how good AI systems are or will become, a human is needed every step of the way.

Finally, we shared that with the EU AI Act's AI literacy requirements now in effect, IAPP community members have contributed significant content on how to build and advance AI literacy. Key articles include: Designing an AI literacy program, Assessing AI literacy needs, Maturing the AI literacy program, and Integrating AI literacy into compliance frameworks.

If your organization has not already implemented a strong AI literacy program as part of your AI governance efforts, these are great articles to help you get started, whether you are operating in the EU, Canada or elsewhere. 

Ashley Casovan is the managing director for the IAPP AI Governance Center.

This monthly column originally appeared in the AI Governance Dashboard, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.