Governance practices for artificial intelligence systems and the regulatory scrutiny Canadian and global regulators are currently applying to the technology were major discussion items at the IAPP Canadian Privacy Symposium 2024 in Toronto.

Privacy Commissioner of Canada Philippe Dufresne set the tone for the CPS breakout sessions in his opening keynote address, in which he said that regardless of the breakneck pace at which AI technology is advancing, developers and deployers of various models must ensure they adhere to existing privacy and civil rights laws.

"Protecting privacy with maximum impact means using existing laws to address the new increasing challenges," Dufresne said in his keynote remarks, which is one of the Office of the Privacy Commissioner of Canada's key goals as part of its 2024-27 strategic plan.

During a breakout session focusing on the approach the OPC is taking toward regulating AI technologies, OPC Senior Technology Advisor Vance Lockton, CIPP/C, CIPM, said the agency and data protection authorities around the world are primarily gearing their enforcement strategies toward the developers of foundational AI models, despite the general lack of AI-specific legislation around the world until the EU AI Act enters into force.

For the time being, Lockton said, regulators' primary concern for organizations looking to integrate generative AI technologies is ensuring they have a lawful basis to deploy systems that meet their business objectives. A major point of scrutiny in establishing that lawful basis is whether the model an organization intends to deploy relies on web scraping to collect training data.

"When you're scraping potentially sensitive information from a wide range of sites, like if it is indiscriminate scraping, you're probably not going to pass a genuine interest test," Lockton said. "You're also going to have to pass the test of a reasonable person finding the collection and use is done for appropriate purposes, and that's not a nontrivial test.

As organizations throughout the world seek to integrate AI, how to manage the models while safeguarding personal information was top of mind for privacy professionals at CPS, who are increasingly being tasked with overseeing the privacy-related elements of AI system deployments.

During the closing keynote panel moderated by IAPP AI Governance Center Director Ashley Casovan, discussion focused on Canada's role in helping formulate global best practices for AI governance. The country was one of the first developed nations to lay the groundwork in the field when the Canada Institute for Advanced Research released a national AI strategy in 2017.

Vector Institute Chief Data Officer Roxana Sultan said Canada's approach to technological innovation and regulation has been key in attracting top AI talent from around the world. Beyond conducting research on the "latest and greatest (AI) innovation," Sultan indicated Canadian technology researchers are also focusing on "building out the technical framework for privacy-enhancing technologies, and evaluative frameworks to assess and measure how safe or how trustworthy a particular AI innovation may be."

"We have all of the infrastructure that we've built out … to create the environment and the context for these people to succeed, and to have the ingredients that they need to continue to move the needle when it comes to AI and machine learning innovation," she added. "We can undertake very robust work to assess the safety of (a) model to measure for any potential risk of bias or data drift or even model degradation over time."

To translate Canada's technological research into effective business practices, organizations need a robust AI governance program.

In a breakout session on creating an organizational AI governance program, Saputo Legal Affairs Director and Privacy Officer Sarah Lefebvre, CIPM, said the first step toward establishing such a program is understanding the organization's core business objectives and then determining which AI tools could enhance its competitiveness.

Companies should pause and evaluate the ripple effects of the AI solutions they are considering, according to Lefebvre. The rush to "jump on the boat" and integrate solutions without thinking them through could result in problems ranging from reputational harm to data privacy risks.

If a given company is dealing with other priorities beyond AI integration, Lefebvre said it is perfectly fine to "wait for the next boat" and onboard AI technologies when the time is right.

"Think about how you want to mitigate and get a bit ahead of the curve," Lefebvre said. "You may not be sitting in a role where you can drive innovation as a privacy professional and you think about compliance. But getting around the table with discussing it with other people within company is really going to help maybe not miss the boat or take the next boat."

Alex LaCasse is a staff writer for the IAPP.