While global jurisdictions are taking different approaches to tackling artificial intelligence, the debate over innovation versus regulation is a common thread. Though Canada is not shying away from the balancing act, it has yet to reach consensus on how to strike that balance.
Prime Minister Mark Carney emphasized plans for Canada's AI transformation while on the campaign trail and recently appointed a federal AI minister to his cabinet to oversee their implementation. What remains unclear is the regulatory balance, as provinces begin to address AI safety and governance in the absence of federal action.
The Parliament of Canada worked toward an answer with proposed AI regulation packaged into the omnibus Bill C-27, which was considered across two legislative cycles. Lawmakers held countless hearings on the bill during the last legislative session, including debate over whether AI regulation required separation from the digital omnibus that would simultaneously update Canada's federal private-sector privacy law.
Bill C-27 was ultimately abandoned when the legislative docket was cleared ahead of national elections, in which the Liberal Party maintained control of the government.
Privacy Commissioner of Canada Philippe Dufresne told IAPP Canada Privacy Symposium attendees he expects AI regulation "will again become a legislative priority" with the aim to "ensure that Canadians remain protected in a modern world and to provide a framework within which the digital economy can thrive."
"AI holds incredible promise in advancing innovation, efficiency, and convenience — and for Canadians to fully embrace it, they must have confidence that it is being developed and deployed in a responsible and privacy preserving manner," Dufresne said in his CPS keynote remarks.
The 45th Parliament begins 26 May. There is not yet an indication whether Bill C-27 will be reintroduced or whether lawmakers might opt for a different legislative vehicle to cover AI.
AI and beyond
Despite the parliamentary debate over whether overlapping AI and privacy regulation makes sense, much attention is being paid to how safety and transparency measures can cover multiple digital fields, including AI.
Dufresne described the current crossroads as "a pivotal moment" in a growing digital economy where Canada "cannot be left behind" and "must be at the table." That urgency has led Dufresne into domestic and international partnerships that aim to "develop and champion privacy principles that will support AI technologies that are designed and built on a solid foundation."
At the international level, through recent G-7 Data Protection Authorities Roundtables, the Office of the Privacy Commissioner is helping build out a blueprint for the privacy considerations AI requires.
"Our work has also emphasized the need for developers and providers of generative AI to embed privacy in the design, conception, operation, and management of new products and services, and that they consider the unique impact that these tools have on children as well as on groups that have historically experienced discrimination or bias," Dufresne said.
Provincial efforts
There is potential for fragmentation on AI regulation in the face of federal inaction. Provincial governments are beginning to turn the gears, with Ontario and Quebec boasting new requirements around AI development and use.
Ontario's Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, places guardrails around public-sector AI development and use. Quebec's Law 25 is a modernization of private-sector privacy law that contains associated AI provisions, including those covering automated decision-making technology.
A CPS breakout session dove into the perceived peaks and valleys in Ontario's AI governance landscape, which includes the Responsible Use of Artificial Intelligence Directive for public entities in addition to Bill 194.
According to Ontario's new law, covered entities will follow "requirements to provide information, to develop and implement accountability frameworks and to take steps respecting risk management." The statute also calls for human oversight and the promulgation of technical standards where necessary.
Bill 194 is not yet fully in effect, with final regulations required to implement its AI provisions.
The directive, which took force in December 2024, is similarly prescriptive, setting out six responsible use principles and separate requirements covering AI risk management, disclosure and reporting, and transparency around purpose.
Melissa Kittmer, assistant deputy minister of strategic policy at Ontario's Ministry of Public and Business Service Delivery and Procurement, highlighted the differences between the law and directive. She noted the directive has wide applicability across public entities' AI deployments while the law exclusively covers AI in cybersecurity and data protection contexts.
Notably, Kittmer pointed to the directive's definition of "AI use case" as the linchpin for "the framework around when requirements come into play." She also defended the effectiveness of the directive, which isn't viewed as being as stringent as a law or regulation.
"It's not just a guidance," Kittmer said, noting legislation spawned the directive. "If you're issuing guidance, it's sort of saying, 'Here are best practices or things you should consider.' A directive does set out requirements."
University of Ottawa Canada Research Chair in Information Law and Policy Teresa Scassa acknowledged Ontario's initiatives for AI guardrails are "important steps," but she still believes there's "room for more" that could help inform federal measures. She zeroed in on transparency, specifically highlighting "the need to show your work" and how essential it is to increasing public trust in the government's AI endeavors.
"I think that's another piece of the puzzle. The ability to look under the hood to some extent, see what's happening, follow what's happening and criticize what's happening," Scassa said. "Public engagement is good, but there has to be enough space and time for the public to engage, and for that engagement to be taken on board and make a difference."
Joe Duball is the news editor for the IAPP.