Hello everyone — Kris Klein is on vacation and has foolishly agreed to let me write the digest notes this week. 

ChatGPT is all anyone is talking about, so I thought I would too. As you plan for compliance with Quebec's Bill 64, which takes full effect in September, and, hopefully, Canada's Bill C-27 later this year, keep in mind how these laws will affect your organization's use of artificial intelligence. It is not clear how the Artificial Intelligence and Data Act will develop or even whether it will go forward: in November, the government agreed to split the vote on AIDA from the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act so the latter two could proceed on their own, largely due to concerns over AIDA's vagueness and reliance on regulation.

I asked ChatGPT to write a biography for me. It produced a beautiful bio of me as a professor of computational linguistics at the University of Maryland, with degrees from prestigious universities like the University of California, Berkeley. All of it was completely false, but it almost convinced me. Where did this come from? My name is fairly unique; as best I can tell, ChatGPT handed me the resume of one of its creators. Trusting what comes from the machine too much is not a new risk, but it is amplified when the output is written so convincingly.

Software companies are rushing to introduce AI into systems your organization may already be using to help manage time, select employees and support decision-making. Consider the impacts on employment, credit or insurance, especially when biases in training data work their way into conclusions with dramatic consequences for individuals. The effects will reach diverse areas: purpose limitation, consent, defining legitimate purposes and, most obviously, the right to review automated decision-making. How do we explain results when we barely understand how the AI arrived at them?

AI can also be used for malicious ends. There has been plenty of news coverage of deepfake videos and audio, which let malicious parties put words in your mouth that you never uttered. Could this be used to commit identity theft? I wonder whether the Canadian banks using voice recognition for identity validation have considered this.

And where does all the data used to train AI come from? Our participation on the internet has become fodder not only for training facial recognition systems; now every aspect of our behavior can be cloned and mimicked to sell to us better, identify us and more. None of this could plausibly have been consented to, as few contemplated these uses until now.

As a thought experiment, I asked ChatGPT to describe how an evil privacy lawyer would advise a company bent on stealing data. Its response was remarkably accurate, describing what amounts to a playbook of dark patterns: seemingly benign features that collect personal information through "customized experiences," manipulative language and design techniques, information collection disguised as a benefit to users, and collection without knowledge or consent. It is funny how much more accurate it was here than with my bio.

While AI presents tremendous opportunities, it comes with challenges privacy professionals in Canada will struggle with under both current and pending laws. I asked ChatGPT to write a funny email signature line for a privacy lawyer, and it responded with: "Privacy is no laughing matter … except for my clients."

Funny, but I am not sure how to take that. I am pretty sure my preeminence as a privacy pro/humorist is not going to be overtaken by an AI.

Yet.