This article is part of a five-part series co-sponsored by OneTrust. The full series can be accessed here.
Published: June 2024
Despite its population of only 40 million, Canada has a track record of developing AI capabilities and talent. The country hosts numerous impactful startup accelerators, world-class researchers and universities dedicated to fostering a vibrant AI culture. Notably, it is home to several of the "godfathers of AI," including Geoffrey Hinton and Yoshua Bengio, who won the Turing Award in 2018 for their formative research on deep learning along with Yann LeCun.
In 2017, Canada became the first country to launch a national AI strategy, seeking to understand the implications and opportunities these powerful technologies hold for its economy and society. A cornerstone of the Pan-Canadian AI Strategy is the work led by the Canadian Institute for Advanced Research in close partnership with three world-class national AI research institutes: the Montreal Institute for Learning Algorithms, the Vector Institute and the Alberta Machine Intelligence Institute. The strategy's vision is to make Canada one of the world's most vibrant AI ecosystems.
Recognizing Canada's innovative potential, the federal government, provincial governments, civil society organizations and industry have been active in seeking to create the necessary frameworks within which innovation can flourish safely and responsibly.
History and context
The federal government sets national AI standards and policies, while provinces handle localized issues like data privacy. In 2017, the federal government launched the first phase of its Pan-Canadian AI Strategy with a CAD125 million investment focusing on three pillars:
- Commercialization, which involves transitioning AI research into practical applications for the private and public sectors.
- Standards, which focus on developing and adopting AI standards.
- Talent and research, which aim to foster academic research and enhance computing capacity for AI advancements.
In 2019, two years after launching phase one of its Pan-Canadian AI Strategy, Canada announced its Digital Charter. This charter outlines 10 principles to guide the federal government's digital and data transformation efforts, with AI playing a crucial role.
In 2022, phase two of the strategy was implemented, adding over CAD433 million to the overall budget to be utilized over the course of 10 years. The importance of AI was underscored when Bill C-27, also known as the Digital Charter Implementation Act, was introduced to Parliament that same year. The act includes three key components: privacy reform, the establishment of a Personal Information and Data Protection Tribunal, and the introduction of a comprehensive AI and Data Act.
Alongside these domestic concerns, Canada has also played a significant role in turning international attention toward collectively developing AI in a responsible manner grounded in human rights. Along with France, it was an initial driving force behind the Global Partnership on AI, a multistakeholder forum with 29 participating member nations.
Understanding the importance of leading by example, Canada was the first country in the world to create a legally binding, AI-specific instrument. Focused on the government's own use of AI, the Directive on Automated Decision-Making was launched in 2019. Designed as a risk-based policy, an approach since popularized by the EU AI Act, the DADM requires the use of a standardized algorithmic impact assessment tool to determine a system's risk level, allowing obligations to be aligned with that risk. Many of the concepts and key requirements of this policy resemble those found in related policies published today, although, because the DADM covers automated decision-making rather than AI more broadly, questions remain about how it relates to those other policies. In 2023, with the same public sector scope, the government released guidelines on generative AI.
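To make the DADM's risk-tiering step concrete, the sketch below shows how a questionnaire-based impact assessment can map weighted answers to a risk level that then drives obligations. The question names, weights and level thresholds here are illustrative assumptions for exposition, not the official Algorithmic Impact Assessment questionnaire.

```python
# Hypothetical sketch of an impact-assessment scoring step in the spirit of
# the Algorithmic Impact Assessment tool under the Directive on Automated
# Decision-Making. All questions, weights and thresholds are invented for
# illustration; the real tool uses a much longer standardized questionnaire.

def impact_level(raw_score: int, max_score: int) -> int:
    """Map a questionnaire score to an impact level from I (1) to IV (4)."""
    if max_score <= 0:
        raise ValueError("max_score must be positive")
    pct = raw_score / max_score
    if pct <= 0.25:
        return 1
    if pct <= 0.50:
        return 2
    if pct <= 0.75:
        return 3
    return 4

# Illustrative yes/no risk questions, each paired with a weight.
answers = {
    "affects_economic_interests": (True, 3),
    "uses_personal_information": (True, 2),
    "decision_fully_automated": (False, 4),
}
raw = sum(weight for flagged, weight in answers.values() if flagged)
max_possible = sum(weight for _, weight in answers.values())
level = impact_level(raw, max_possible)
print(level)  # higher levels attract stricter oversight obligations
```

Under a risk-based directive, a higher level would trigger stricter requirements, such as peer review, human-in-the-loop approval or public notice, which is the alignment of "risk-appropriate obligations" the article describes.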
Recognizing the need to continue to build on this policy suite in light of the ever-changing nature of AI technologies, the federal government hosted a roundtable to develop an AI strategy for the public service. This strategy focuses on three main areas: building an AI-ready workforce and fostering AI growth through innovation, enabling infrastructure and engagement, and implementing tools for responsible and effective AI adoption.
Beyond its focus on government use of AI, in 2023 the country brought together key industry actors to commit to a voluntary code of conduct for the safe and responsible use of generative AI. These commitments align with similar international efforts, such as the Bletchley Declaration, a key agreement reached at the first AI Safety Summit hosted by the U.K.
To complement the existing efforts of the Pan-Canadian AI Strategy, the 2024 federal budget allocated CAD2.4 billion to advance AI with an eye on both internal use and external oversight. Of the budget, CAD2 billion is dedicated to a new AI Compute Access Fund as well as funding for a safety institute and advancement of sectoral research. This fund aims to invest in Canadian-made computing infrastructure to support AI businesses and researchers.
Approach to regulation
Canada is following the growing trend of regulating AI based on risk, similar to the EU AI Act. In 2022, the federal government introduced Bill C-27. Part III of this bill, the AI and Data Act, would establish a risk-based framework for regulating AI systems. Numerous amendments were proposed by late 2023 and are still under discussion. Below is a summary of the key concepts incorporated into the AIDA.
Similar to the EU, Canada's approach to legislating AI seeks to balance protecting rights with fostering innovation. The AIDA aims to regulate trade "by establishing common requirements, applicable across Canada, for the design, development, and use of (AI) systems" and to avoid harm by prohibiting certain conduct in relation to AI systems with a specific focus on "high-impact systems."
The AIDA proposes the following approach:
- Building on existing consumer protection and human rights laws, the AIDA would ensure high-impact AI systems meet established safety standards. Regulations defining high-impact AI systems and their requirements are to be developed with input from a broad range of stakeholders, including industry and the public, to avoid overburdening the country's AI ecosystem.
- The Minister of Innovation, Science and Industry would be empowered to administer and enforce the act, ensuring policy and enforcement evolve with technology. A new AI and Data Commissioner would be established as a center of expertise to support regulatory development and administration of the act.
- New criminal law provisions would prohibit reckless and malicious uses of AI that would cause serious harms to Canadians.
Unlike the EU AI Act, the AIDA does not currently ban certain AI uses outright. Instead, it classifies AI systems into high-impact categories, imposing stricter risk management, transparency and accountability obligations on those who make such systems available.
High-impact AI systems
The AIDA defines several high-impact uses of AI systems that carry significant responsibilities for both providers and deployers of these systems. These use cases include:
- Employment: AI systems used for critical employment determinations such as recruitment, hiring, remuneration, promotion and termination.
- Service provision: AI systems that decide whether to provide services to individuals, what type or cost of services to offer, and how these services should be prioritized.
- Biometric processing: AI systems that process biometric information without an individual's consent or use biometric information to assess an individual's behaviour.
- Content moderation or prioritization: AI systems used to moderate content on online communications platforms or prioritize the presentation of such content.
- Health care: AI systems used in health care delivery or emergency services.
- Justice: AI systems used by a court or administrative body in making determinations about individuals who are parties to proceedings before the court or administrative body.
- Law enforcement: AI systems used to assist a peace officer, as defined under Canada's Criminal Code, in the exercise and performance of their law enforcement duties.
Establishing requirements for providers of high-impact AI systems
The AIDA also establishes various requirements for high-impact AI systems before they can be used in international or interprovincial trade and commerce for the first time, including:
- Assessing potential adverse impacts from intended or foreseeable uses of the system.
- Implementing measures to assess and mitigate risks of harm or biased output.
- Testing the effectiveness of these mitigation measures.
- Including features that allow human oversight of the system's operations as outlined in the regulations.
- Ensuring the system performs reliably and as intended.
- Keeping specific records demonstrating compliance with these requirements, including records related to data and processes used in developing the AI system.
Additional requirements also apply to AI systems that rely on machine-learning models and those making changes to high-impact AI systems.
Establishing requirements for those operating high-impact AI systems
For those managing the operations of high-impact AI systems, requirements include:
- Establishing measures to identify, assess and mitigate risks of harm or biased output.
- Testing the effectiveness of these mitigation measures.
- Ensuring human oversight of the system's operations.
- Allowing users to provide feedback on the system's performance.
- Keeping logs and records of the AI system's operations.
- Ceasing operations if there are reasonable grounds to suspect the system has caused serious harm or the mitigation measures are ineffective and notifying the AI and Data Commissioner.
General-purpose AI systems
The AIDA also establishes additional requirements for a general-purpose AI system, which is defined as "an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes, and activities not contemplated during the system’s development." These additional requirements, which must be met by those making these systems available, include:
- Meeting certain requirements with respect to the data used to develop the system.
- Assessing potential adverse impacts from intended or foreseeable uses of the system.
- Implementing measures to assess and mitigate risks of harm or biased output.
- Testing the effectiveness of these mitigation measures.
- Including features that allow human oversight of the system's operations as outlined in the regulations.
- Including plain-language descriptions of the system's capabilities, the risks of harm or biased output, and any other information prescribed by regulation.
- If the system generates digital output consisting of text, images or audio, ensuring best efforts have been made so members of the public can identify the output as being generated by an AI system.
- Keeping records that demonstrate requirements have been met and records related to the data and processes used to develop the general-purpose system and assess its limitations and capabilities.
- Confirming an assessment has been carried out by a third party to ensure compliance with the requirements outlined in the regulations.
In addition to federal legislative efforts, industry-specific regulators are also updating their guidelines and requirements. For instance, the Office of the Superintendent of Financial Institutions has released a draft guideline on model risk management. Currently under consultation and expected to take effect 1 July 2025, these new guidelines will establish practices and expectations for managing the risk of models used by financial institutions, which now include AI and machine-learning methods.
To support these sectoral regulations, Canada is investing significant effort in both domestic and international standards development for AI. As seen through the establishment of an AI and Data Standardization Collaborative, the federal government recognizes the role standards will play in establishing global norms and common best practices for the appropriate development and use of AI. Through the national standards body, the Standards Council of Canada, the federal government has played a significant role in the International Organization for Standardization's AI work. Specifically, it was one of the initial drafters of the ISO/IEC 42001 standard.
Other guidance in AI and automated decision-making includes Health Canada's guidance document on using software as a medical device, the federal government's Guide on the use of generative AI for government institutions and the Office of the Privacy Commissioner of Canada's Principles for responsible, trustworthy, and privacy-protective generative AI technologies.
Wider regulatory environment
There are numerous enacted laws of relevance and application to various elements of the AI governance life cycle. The Personal Information Protection and Electronic Documents Act sets out important rules for how businesses use personal information. To modernize this law for the digital economy, the Consumer Privacy Protection Act was proposed as part of Bill C-27. The government is also working to ensure laws governing marketplace activities stay current.
Data privacy and protection
The Digital Charter Implementation Act introduces the AIDA and overhauls the PIPEDA through the Consumer Privacy Protection Act.
Combining privacy and AI regulation makes sense because data is the key link between them. The CPPA requires organizations to explain any prediction, recommendation or decision made by an automated system that significantly impacts individuals. This explanation must include the type of personal information used.
The CPPA also includes exceptions to consent based on legitimate interest. However, it is unclear whether this extends to using personal data to train AI systems. Under this exception, organizations must identify potential adverse effects of the use and take reasonable measures to mitigate them.
Copyright and intellectual property
The AIDA does not currently address copyright issues. Instead, it appears the government aims to tackle AI and intellectual property issues through an updated Copyright Act. In 2021, before the launch of many generative AI tools, Canada began consulting on Copyright Act updates. With rapid advancements in AI, especially generative AI, another public consultation began in December 2023.
The federal government aims to adapt the current copyright regime to address challenges posed by generative AI systems, which can produce creative content mimicking that created by humans. This raises concerns about the uncompensated use of protected works in training these AI systems, attribution and remuneration for AI-generated content, and enforcing the rights of copyright holders. Key discussion points of this consultation included text and data mining, authorship and ownership, and liability.
Consumer protection and human rights
Given the risks to human rights, including discrimination, federal, provincial and territorial human rights laws play a crucial role in protecting individuals from AI-related harms. Redress and contestability mechanisms for discrimination, like those featured in Quebec's Law 25, are important, but individuals affected by AI discrimination may be unaware it has occurred. In 2021, the Law Commission of Ontario, the Ontario Human Rights Commission and the Canadian Human Rights Commission launched a joint research and policy initiative to examine human rights issues in AI development, use and governance.
Regarding consumer protections, the Canada Consumer Product Safety Act and various provincial consumer protection laws address issues like misrepresentation and undue pressure while remaining technology neutral. Updates to Ontario's consumer protection legislation, Bill 142, provide insight into potential future changes. This bill maintains a technology-neutral approach but incorporates updates reflecting the current digital landscape. Key proposed changes include new provisions on automatic subscription renewals, unilateral contract amendments and easier mechanisms for consumers to unsubscribe from services. These amendments aim to enhance transparency and fairness in consumer transactions, especially those occurring online or through automated means.
Competition
The Competition Bureau of Canada is actively engaged in the discussion around the intersection of AI and competition. In May 2024, it published a discussion paper setting out considerations for how AI may affect competition. Key topics analyzed as a part of the paper include barriers to entry, product differentiation and market power, economies of scope and scale, network effects and competitive conduct, and consumer protection.
Additionally, several other frameworks apply to AI use, including:
- The Canada Consumer Product Safety Act
- The Food and Drugs Act
- The Motor Vehicle Safety Act
- The Bank Act
- The Canadian Human Rights Act, and other provincial and territorial human rights laws
- The Criminal Code
Next steps
The AIDA aims to proactively identify and mitigate risks to prevent harms from AI systems. As AI technology evolves, new capabilities and uses will emerge, requiring a flexible approach. As of June 2024, the AIDA has passed the second reading in the House of Commons, with one more reading pending, followed by three readings in the Senate.
Despite extensive proposed amendments and calls to separate the AIDA from the CPPA and the PIPEDA, it is seen by many as a significant step toward providing certainty for AI development and implementation. With a clear federal strategy in place, supported by some mandatory and many voluntary guidelines, reaching consensus on key aspects of AI governance looks to be within reach for Canada. However, even if the AIDA were to pass today, there would be a lengthy implementation timeline, likely extending into late 2025 at the earliest.
Special thank you to Kathrin Gardhouse for her contribution to the development of this article.
Global AI Governance Law and Policy: Jurisdiction Overviews