Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

My friend, part-time comedian and occasional instigator, Omer Tene suggested in a LinkedIn post a few years ago that perhaps the artificial intelligence and privacy fields shouldn't mingle as much as some people — like the IAPP's J. Trevor Hughes, CIPP — suggest they should. Hughes and Tene are good friends, so Tene's initial position seemed to me, at the time, to be him wearing the instigator hat, at least in part. We all have that friend who likes to raise a little hell or, at a minimum, play the devil's advocate.

Fast forward to 2025 and there is now no doubt that the privacy and AI fields have intertwined. I think you can include cybersecurity in that mix, too, because so much about doing AI and privacy well is about securing your systems.

These professions are so connected today that the IAPP has dedicated staff to ensure cybersecurity and AI issues, news, training, resources, and more are available to our community.

The person leading the AI charge is Ashley Casovan, managing director of the IAPP AI Governance Center. I'm not sure how the IAPP always seems to find the brightest, most strategic people for these jobs, but I can attest to how well Casovan is leading the IAPP's work to ensure AI governance professionals get what they need and deserve.

Yesterday, I picked Casovan up from the Ottawa train station; she was in town to present on a panel later that day at a conference organized by the Canadian Internet Governance Forum. She focused most of her talk on the need for trust in the AI ecosystem.

In my mind, there are two ways to build trust in something like this. The first is by having appropriate guardrails in place. Here, I'm not advocating any particular type of guardrail — whether by law, regulation, guidance or industry best practice — but the need, generally, for some rules on how to deploy this technology in a way that will not cause harm.

In addition to all the great AI work out there, there are already too many examples of AI causing harm, some of them tragedies. So, I'm not sure what it's going to take for people to wake up and say, yes, having seat belts in cars is a good thing. It does not stunt innovation. AI needs the same.

The second is trust in how AI systems are built. Just as trust is built into privacy programs, it comes from good governance: having qualified people who can analyze the systems, identify risks and implement mitigation measures. This is part of the IAPP's mission and, hence, falls squarely on Casovan's shoulders to lead.

Casovan writes an introductory article on a similar theme for the IAPP AI Governance Dashboard that I would encourage you to read. If your life and job have evolved as much as mine to include these emerging fields, I encourage you to also sign up for the AI Governance Dashboard. Like this one, it's issued weekly, though with the pace of change in the AI industry, it may well become more frequent than that.

Kris Klein, CIPP/C, CIPM, FIP, is the country leader, Canada, for the IAPP. 

This article originally appeared in the Canada Dashboard Digest, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.