
 

Global AI Governance Law and Policy: Canada

This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in Canada. The full series can be accessed here.


Published: September 2025



Despite its population of just over 41 million, Canada has a strong track record of developing AI capabilities and talent. The country hosts numerous impactful startup accelerators, as well as world-class researchers and universities dedicated to fostering a vibrant AI culture. Notably, it is home to two of the so-called "godfathers of AI," Geoffrey Hinton and Yoshua Bengio, who, along with Yann LeCun, won the 2018 Turing Award for their formative research on deep learning. In October 2024, Hinton was also awarded the Nobel Prize in Physics, further cementing Canada's leadership in AI.

In 2017, Canada became the first country to launch a national AI strategy, seeking to understand the implications and opportunities these powerful technologies can have for its economy and society. A cornerstone of the Pan-Canadian AI Strategy is the work led by the Canadian Institute for Advanced Research. In close partnership with three world-class national AI research institutes, the Montreal Institute for Learning Algorithms, the Vector Institute and the Alberta Machine Intelligence Institute, the strategy's vision is to make Canada one of the world's most vibrant AI ecosystems.

Recognizing Canada's potential for technological advancement, the federal government, provincial governments, civil society organizations and industry have been active in seeking to create the necessary frameworks within which innovation can flourish safely and responsibly.


History and context

The federal government sets national AI standards and policies, while provinces handle localized issues like data privacy. In 2017, the federal government launched the first phase of its Pan-Canadian AI Strategy with a CAD125 million investment focusing on three pillars:

  • Commercialization, which involves transitioning AI research into practical applications for the private and public sectors. 
  • Standards, which focus on developing and adopting AI standards.
  • Talent and research, which aim to foster academic research and enhance computing capacity for AI advancements.

In 2019, two years after launching phase one of its Pan-Canadian AI Strategy, Canada announced its Digital Charter. This charter outlines 10 principles to guide the federal government's digital and data transformation efforts, with AI playing a crucial role.

In 2022, phase two of the strategy was implemented, adding over CAD433 million to the overall budget to be utilized over the course of 10 years. The importance of AI was underscored when the Digital Charter Implementation Act was introduced to Parliament that same year. The act includes three key components: privacy reform, the establishment of a Personal Information and Data Protection Tribunal, and the introduction of a comprehensive AI and Data Act.

While concerned about the domestic implications of AI, the country also played a significant role in turning international attention and activity toward collectively developing AI in a responsible manner grounded in human rights. Canada, along with France, was an initial driving force behind the Global Partnership on AI, a multistakeholder forum with 29 participating member nations. In 2024, the Global Partnership on AI was integrated into the Organisation for Economic Co-operation and Development's AI policy work, and it now functions alongside the OECD's AI Policy Observatory. Canada continues to host one of three GPAI secretariats through the International Center of Expertise in Montreal on Artificial Intelligence.

Understanding the importance of leading by example, Canada was the first country in the world to create a legally binding, AI-specific instrument. Focused on the government's use of AI, the Directive on Automated Decision-Making was launched in 2019. Designed as a risk-based policy of the kind since popularized by the EU AI Act, the DADM requires the use of a standardized algorithmic impact assessment tool to determine the risk level of a system, allowing obligations to be matched to that risk. Many of its concepts and key requirements resemble those found in policies published today. Because the directive applies to automated decision-making rather than AI more broadly, other types of AI systems may raise questions better addressed by complementary policies. In 2023, with the same public sector scope, the government released guidelines for generative AI.
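The risk-matching mechanism described above can be sketched in a few lines. The following is a purely hypothetical illustration of how a scored impact assessment might map a system to a tiered impact level with escalating obligations; the thresholds, scores and obligation labels are assumptions for illustration and do not reproduce the official algorithmic impact assessment tool.

```python
# Illustrative sketch only -- not the Government of Canada's actual AIA tool.
# The DADM uses a standardized questionnaire to place a system into one of
# four impact levels (I-IV); the thresholds and labels below are hypothetical.

def impact_level(score: int, max_score: int) -> int:
    """Map a raw assessment score to an impact level from 1 (I) to 4 (IV),
    based on the fraction of the maximum possible score."""
    if not 0 <= score <= max_score:
        raise ValueError("score must be between 0 and max_score")
    fraction = score / max_score
    if fraction <= 0.25:
        return 1
    if fraction <= 0.50:
        return 2
    if fraction <= 0.75:
        return 3
    return 4

# Higher levels attract progressively stricter obligations
# (labels are illustrative, not quoted from the directive).
OBLIGATIONS = {
    1: "basic transparency notice",
    2: "plain-language notice and documented testing",
    3: "peer review and a human intervention point",
    4: "independent peer review and senior-level approval",
}

system_score = 62  # hypothetical questionnaire result out of 100
level = impact_level(system_score, 100)
print(f"Impact level {level}: {OBLIGATIONS[level]}")
```

The design choice mirrored here, a single standardized questionnaire feeding a small number of tiers, is what lets obligations scale with risk rather than applying uniformly to every automated system.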

Recognizing the need to continue to build on this policy suite in light of the ever-changing nature of AI technologies, the federal government hosted a roundtable to develop an AI strategy for the public service. This strategy focuses on three main areas: building an AI-ready workforce and fostering AI growth through innovation, enabling infrastructure and engagement, and implementing tools for responsible and effective AI adoption.

In 2023, while continuing to focus on the government's use of AI, the country also brought together key industry actors to commit to a voluntary code of conduct for the safe and responsible use of generative AI. These commitments align with similar international efforts, such as the Bletchley Declaration, a key agreement signed at the first AI Safety Summit, hosted by the U.K.

To complement the existing efforts of the Pan-Canadian AI Strategy, the 2024 federal budget allocated CAD2.4 billion to advance AI, with an eye on both internal use and external oversight. Of this, CAD2 billion is dedicated to a new AI Compute Access Fund, with additional funding for a safety institute and the advancement of sectoral research. The fund aims to invest in Canadian-made computing infrastructure to support AI businesses and researchers.


First minister of AI

In May 2025, Canada's new prime minister, Mark Carney, announced a new cabinet and appointed Evan Solomon as the first minister responsible for AI and digital innovation. While portfolio-specific mandate letters have not been made public, a mandate letter to all ministers has been shared outlining priorities. Many of this government's priorities are shaped by current economic realities, including the need to foster stability and security, strengthen sovereignty and expand trade partnerships. These economic objectives are closely connected to the country's innovation agenda.

For AI in particular, the direction is significant: while Canada has long been recognized as a global hub for cutting-edge research, it has historically struggled to commercialize home-grown AI products and scale them for both domestic use and international markets.

Similar portfolios are also beginning to appear at the provincial level. British Columbia has appointed Minister Rick Glumac to be responsible for an artificial intelligence and new technologies portfolio.


Investing in Canadian innovation

It is too early to report on the direction of this new ministry. However, recent funding announcements make clear that Canada understands the economic stakes of AI and intends to capitalize on significant domestic capabilities to develop new AI technologies.

On 10 July, Solomon announced a nearly CAD100 million investment in partnership with Scale AI, Canada's AI-focused global innovation cluster. This joint funding supports 23 new projects across Canada spanning logistics, supply chain management, health and finance. The announcement was an early indication that the government intends to support Canada's AI innovators to secure future economic stability.


AI and the G7

At the beginning of 2025, Canada took on the G7 presidency. During the June summit in Kananaskis, Alberta, the G7 leaders released a statement on AI for Prosperity. In addition to recognizing that AI will be an economic engine for all G7 nations, the statement highlights the role of small and medium-sized enterprises, particularly the support SMEs will need to adopt and develop new technologies. To assist in this goal, the leaders announced the creation of the G7 GovAI Grand Challenge, under which governments will work together to accelerate AI adoption. It includes a G7 AI Adoption Roadmap, which looks at public sector actions and ways companies can adopt AI and scale their businesses. These efforts build upon the outcomes of the G7 Hiroshima AI Process.


Approach to regulation

In 2022, the federal government introduced Bill C-27. Aligned with global legislative trends at the time, this framework proposed comprehensive oversight for AI to complement existing privacy (Part I) and consumer protection (Part II) legislation. Part III of the bill, the AI and Data Act, sought to establish a risk-based framework for regulating AI systems. While Bill C-27 reached second reading, it died on the order paper when Parliament was dissolved ahead of the early 2025 election that produced a new government.

Similar to the EU, Canada's approach to legislating AI sought to balance protecting rights with fostering innovation. The AIDA aimed to regulate trade "by establishing common requirements, applicable across Canada, for the design, development, and use of (AI systems)" and to avoid harm by prohibiting certain conduct in relation to AI systems with a specific focus on "high-impact systems."

The AIDA did not outright ban certain AI uses, as the EU AI Act does. Instead, it classified AI systems into high-impact categories, imposing stricter risk management, transparency obligations and accountability frameworks for those who make such systems available.

At the provincial level, Québec and Ontario have taken notable steps toward regulating AI. Québec’s Law 25, a major privacy reform, includes requirements for transparency and safeguards around automated decision-making, making it one of the first provincial frameworks to directly address AI implications. In Ontario, Bill 194 passed in 2024; it focuses on strengthening cybersecurity and establishing accountability, disclosure, and oversight obligations for AI use across the public sector. In addition to legislation, some provinces have also released their own frameworks and principles to guide their use of AI, including Ontario and British Columbia.

Industry-specific regulators are also updating their guidelines and requirements. For instance, the Office of the Superintendent of Financial Institutions has released a draft update to its Model Risk Management Guideline (E-23). According to OSFI's latest quarterly release, the finalized guideline is expected on 11 Sept. If adopted in its current form, E-23 will set out enhanced expectations for how financial institutions manage model risk, explicitly extending to models that incorporate artificial intelligence and machine learning.

Additionally, several law societies across Canada, including those in Ontario, Alberta, Manitoba, Saskatchewan, and British Columbia, have released guidelines on the responsible use of AI in the legal profession.

To support these sectoral regulations, Canada is investing significant effort in both domestic and international standards development for AI. As seen through the establishment of the AI and Data Standardization Collaborative, the federal government recognizes the role standards will play in establishing global norms and common best practices for the appropriate development and use of AI. Through its national standards body, the Standards Council of Canada, the federal government has played a significant role in the International Organization for Standardization's AI work. Specifically, Canada was one of the initial drafters of the ISO/IEC 42001 standard.

The Digital Governance Council is also a key player in setting AI standards in Canada. Through its accredited standards program, the DGC develops national guidelines for the responsible design, deployment and oversight of AI systems, helping organizations align with best practices in trust, safety and accountability.

Other guidance in AI and automated decision-making includes Health Canada's guidance document on using software as a medical device, the federal government's Guide on the use of generative AI for government institutions and the Office of the Privacy Commissioner of Canada's Principles for responsible, trustworthy, and privacy-protective generative AI technologies.


Wider regulatory environment

There are numerous enacted laws of relevance and application to various elements of the AI governance life cycle. The Personal Information Protection and Electronic Documents Act sets out important rules for how businesses use personal information. To modernize this law for the digital economy, the Consumer Privacy Protection Act was proposed as part of the Digital Charter Implementation Act, 2022. The government is also working to ensure laws governing marketplace activities stay current.

Additionally, several other frameworks also apply to AI use.



Agentic AI

Canada does not currently have legislation at the federal or provincial level that specifically regulates agentic AI. Given the potentially broad applications of agentic AI, any future legislation on AI systems is expected to encompass these technologies within its scope.

At present, the government has developed the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. To support public servants in their use of AI, the Canada School of Public Service has released a course entitled "The Rise of Agentic Artificial Intelligence."

Future of AI governance in Canada

It is unclear whether the current government will retable AI legislation. Given the vigorous debate around the AIDA and the current geopolitical context, any new AI legislation is unlikely to look the same as it did in 2022. However, trust continues to be a significant barrier to the public's adoption of AI in Canada, and it remains a crucial factor in advancing AI adoption in any nation. The new government has clearly signaled that AI and digital innovation are priorities, so it will be interesting to see how this shapes the Canadian AI governance landscape. Whether rules come in the form of additional support for industry-developed standards, top-down legislation or sector-specific rules developed by regulators remains to be seen.


Full series overview

Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.

Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.

Global AI Governance Law and Policy

Jurisdiction Overviews 2025

The overview page for this series can be accessed here.




