The biggest privacy news story this week, in my mind, is the one covered by the IAPP’s Associate Editor Ryan Chiavetta about the collective and loud voice of all privacy commissioners in Canada speaking out against Clearview AI. It’s summarized and linked below and I encourage you to read it.
In a nutshell, Clearview AI scraped the internet for billions of photos of people to use in its facial recognition technology. The company then sold that technology to a myriad of interested parties, including law enforcement agencies around the world. Apparently, the RCMP and other law enforcement bodies in Canada were customers, too.
Were the creators of this technology not the least bit creeped out by the movie "Minority Report"? I mean, it doesn’t take much good sense to realize that what you’re doing might not be legal, let alone ethical.
The Canadian commissioners, being somewhat powerless to levy any meaningful punishment, seem interested in finding other international jurisdictions that might want to strike at the company’s pocketbook. I’m curious to see whether this will happen and not-so-secretly hoping that it does.
I’ve long maintained that privacy laws and privacy ethics do not need to be a barrier to doing good things. But the trick in my mantra is that you actually have to be doing good things. Indiscriminately collecting everyone’s photos without consent, using them to create biometrics and selling that to the highest bidder does not seem to be a good thing. Am I missing something?
Undoubtedly, the case is another example of why Canada’s privacy laws need reform (can we please move forward on this quickly before more organizations start thinking these types of data uses are permissible?). But more than that, I think this case ups the ante on why we need more people committing to ethical uses of data.