This article is part of a five-part series co-sponsored by OneTrust. The full series can be accessed here.

Published: June 2024

Despite its population of only 40 million, Canada has a track record of developing AI capabilities and talent. The country hosts numerous impactful startup accelerators, world-class researchers and universities dedicated to fostering a vibrant AI culture. Notably, it is home to several of the "godfathers of AI," including Geoffrey Hinton and Yoshua Bengio, who won the Turing Award in 2018 for their formative research on deep learning along with Yann LeCun.

In 2017, Canada became the first country to launch a national AI strategy, seeking to understand the implications and opportunities these powerful technologies hold for its economy and society. A cornerstone of the Pan-Canadian AI Strategy is the work led by the Canadian Institute for Advanced Research. In close partnership with three world-class national AI research institutes, the Montreal Institute for Learning Algorithms, the Vector Institute and the Alberta Machine Intelligence Institute, the strategy's vision is to make Canada one of the world's most vibrant AI ecosystems.

Recognizing Canada's innovative potential, the federal government, provincial governments, civil society organizations and industry have been active in seeking to create the necessary frameworks within which innovation can flourish safely and responsibly.

History and context

The federal government sets national AI standards and policies, while provinces handle localized issues like data privacy. In 2017, the federal government launched the first phase of its Pan-Canadian AI Strategy with a CAD125 million investment focusing on three pillars:

  • Commercialization, which involves transitioning AI research into practical applications for the private and public sectors.
  • Standards, which focus on developing and adopting AI standards.
  • Talent and research, which aim to foster academic research and enhance computing capacity for AI advancements.

In 2019, two years after launching phase one of its Pan-Canadian AI Strategy, Canada announced its Digital Charter. This charter outlines 10 principles to guide the federal government's digital and data transformation efforts, with AI playing a crucial role.

In 2022, phase two of the strategy was implemented, adding over CAD433 million to the overall budget, to be spent over 10 years. The importance of AI was underscored when Bill C-27, also known as the Digital Charter Implementation Act, was introduced to Parliament that same year. The act includes three key components: privacy reform, the establishment of a Personal Information and Data Protection Tribunal, and the introduction of a comprehensive AI and Data Act.

While concerned about the domestic implications of AI, the country also played a significant role in turning international attention and activity toward collectively working to develop AI in a responsible manner grounded in human rights. As such, Canada, along with France, was an initial driving force behind the Global Partnership on AI, a multistakeholder forum with 29 participating member nations.

Understanding the importance of leading by example, Canada was the first country in the world to create a legally binding AI-specific instrument. Focused on the government's own use of AI, the Directive on Automated Decision-Making was launched in 2019. A risk-based policy of the kind since popularized by the EU AI Act, the DADM requires a standardized algorithmic impact assessment tool to determine a system's risk level, allowing obligations to be aligned with that risk. Many of its concepts and key requirements resemble those found in policies published today. Because the directive addresses automated decision-making rather than AI in general, other policies may apply to systems outside its scope. In 2023, with the same public sector scope, the government released guidelines for generative AI.
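To illustrate how a standardized impact assessment can convert questionnaire answers into a risk tier, the sketch below mimics the general shape of such a tool. The question names, weights and thresholds are hypothetical, not the official AIA scoring; the point is only the pattern of mapping a raw score to an impact level that then drives risk-appropriate obligations.

```python
# Hypothetical sketch of an algorithmic impact assessment scorer.
# The DADM's real AIA is a standardized questionnaire; the weights
# and thresholds below are illustrative, not the official values.

def impact_level(raw_score: int, max_score: int) -> int:
    """Map a raw questionnaire score to an impact level from 1 to 4."""
    if not 0 <= raw_score <= max_score:
        raise ValueError("score out of range")
    pct = raw_score / max_score
    if pct <= 0.25:
        return 1  # little to no impact
    if pct <= 0.50:
        return 2  # moderate impact
    if pct <= 0.75:
        return 3  # high impact
    return 4      # very high impact

# Example: hypothetical weighted yes/no answers summed into a raw score.
answers = {"affects_rights": 3, "irreversible_outcome": 2, "uses_personal_data": 1}
score = sum(answers.values())    # 6 out of a possible 20
level = impact_level(score, 20)  # 30% of maximum -> level 2
```

Higher levels would then attract stricter obligations, such as peer review or human-in-the-loop requirements, mirroring how the DADM scales its requirements to the assessed risk.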

Recognizing the need to continue to build on this policy suite in light of the ever-changing nature of AI technologies, the federal government hosted a roundtable to develop an AI strategy for the public service. This strategy focuses on three main areas: building an AI-ready workforce and fostering AI growth through innovation, enabling infrastructure and engagement, and implementing tools for responsible and effective AI adoption.

Beyond government use of AI, in 2023 the country brought together key industry actors to commit to a voluntary code of conduct for the safe and responsible development and use of generative AI. These commitments align with similar international efforts, such as the Bletchley Declaration, a key agreement concluded at the first AI Safety Summit hosted by the U.K.

To complement the existing efforts of the Pan-Canadian AI Strategy, the 2024 federal budget allocated CAD2.4 billion to advance AI with an eye on both internal use and external oversight. Of the budget, CAD2 billion is dedicated to a new AI Compute Access Fund as well as funding for a safety institute and advancement of sectoral research. This fund aims to invest in Canadian-made computing infrastructure to support AI businesses and researchers.

Approach to regulation

Canada is following the growing trend of regulating AI based on risk, similar to the EU AI Act. In 2022, the federal government introduced Bill C-27. Part III of this bill, the AI and Data Act, would establish a risk-based framework for regulating AI systems. Numerous amendments were proposed by late 2023 and are still under discussion. Below is a summary of the key concepts incorporated into the AIDA.

Similar to the EU, Canada's approach to legislating AI seeks to balance protecting rights with fostering innovation. The AIDA aims to regulate trade "by establishing common requirements, applicable across Canada, for the design, development, and use of (AI) systems" and to avoid harm by prohibiting certain conduct in relation to AI systems with a specific focus on "high-impact systems."

The AIDA proposes the following approach:

  1. Building on existing consumer protection and human rights laws, the AIDA would ensure high-impact AI systems meet established safety standards. Regulations defining high-impact AI systems and their requirements are to be developed with input from a broad range of stakeholders including the industry and public to avoid overburdening the country's AI ecosystem.
  2. The Minister of Innovation, Science and Industry would be empowered to administer and enforce the act, ensuring policy and enforcement evolve with technology. A new AI and Data Commissioner would be established as a center of expertise to support regulatory development and administration of the act.
  3. New criminal law provisions would prohibit reckless and malicious uses of AI that would cause serious harms to Canadians.

Unlike the EU AI Act, the AIDA does not currently ban certain AI uses outright. Instead, it classifies AI systems into high-impact categories, imposing stricter risk management, transparency and accountability obligations on those who make such systems available.

In addition to federal legislative efforts, industry-specific regulators are also updating their guidelines and requirements. For instance, the Office of the Superintendent of Financial Institutions has released a draft guideline on model risk management. Currently under consultation and expected to take effect 1 July 2025, the guideline will establish practices and expectations for managing the risk of models used by financial institutions, a category that now includes AI and machine-learning methods.

To support these sectoral regulations, Canada is investing significant efforts in both domestic and international standards development for AI. As seen through the establishment of an AI and Data Standardization Collaborative, the federal government recognizes the role standards will play in establishing global norms and common best practices for the appropriate development and use of AI. Through its national standards body, the Standards Council of Canada, the federal government has played a significant role in the International Organization for Standardization's work on AI. Specifically, it was one of the initial drafters of the ISO/IEC 42001 standard.

Other guidance in AI and automated decision-making includes Health Canada's guidance document on using software as a medical device, the federal government's Guide on the use of generative AI for government institutions and the Office of the Privacy Commissioner of Canada's Principles for responsible, trustworthy, and privacy-protective generative AI technologies.

Wider regulatory environment

There are numerous enacted laws of relevance and application to various elements of the AI governance life cycle. The Personal Information Protection and Electronic Documents Act sets out important rules for how businesses use personal information. To modernize this law for the digital economy, the Consumer Privacy Protection Act was proposed as part of Bill C-27. The government is also working to ensure laws governing marketplace activities stay current.

Additionally, several other frameworks apply to the use of AI.

Next steps

The AIDA aims to proactively identify and mitigate risks to prevent harms from AI systems. As AI technology evolves, new capabilities and uses will emerge, requiring a flexible approach. As of June 2024, the AIDA has passed the second reading in the House of Commons, with one more reading pending, followed by three readings in the Senate.

Despite extensive proposed amendments and calls to separate the AIDA from the CPPA and the PIPEDA, it is seen by many as a significant step toward providing certainty for AI development and implementation. With a clear federal strategy in place, supported by some mandatory and many voluntary guidelines, reaching consensus on key aspects of AI governance looks to be within reach for Canada. However, even if the AIDA were to pass today, there would be a lengthy implementation timeline, likely extending into late 2025 at the earliest.

Special thank you to Kathrin Gardhouse for her contribution to the development of this article.

Additional resources

Global AI Governance Law and Policy: Jurisdiction Overviews

The overview page for the full series can be accessed here.
