I know the big news this week is the Office of the Privacy Commissioner (OPC) releasing its special report to Parliament on law enforcement’s use of facial recognition. It comes on the heels of the rather scathing report a few months ago, in which Canada’s privacy commissioners admonished Clearview AI for its role in amassing billions of photos of people without their consent. It’s an important issue and I encourage you to read more about it below.
For me, the report comes at a time when I find myself working with clients who are adopting increasingly sophisticated new technologies. I’m getting involved, of course, because those technologies collect and process large amounts of personal information, and my clients want to ensure they are legally compliant. My favorite clients aren’t just trying to meet the legal standard; they’re trying to do what’s best. And doing what’s best is often a balancing act between legitimate business interests and protecting the personal information of customers.
While facial recognition technology is pushing the bounds of what is legal and ethical, it only scratches the surface of what our predictive analytics technologies are going to produce in the coming months and years.
When you interact with your favorite retailer online, they will be using technological tools to predict what you want to see and how you might use their site. These tools will also figure out which type of promotion will work best for you versus someone else. This means some people will be treated differently, receiving one type of promotion rather than another, based on a computer algorithm that is constantly analyzing us.
This got me thinking recently about the OPC’s guidance on no-go zones. I know the Privacy Commissioner has said it will always be unreasonable to use or process personal information in a way that results in discriminatory practices, and I certainly don’t dispute that.
But I think we need to flesh out what is meant by that term, understand it better and make sure everyone’s on the same page. Are we to apply it in the sense used in our human rights legislation, or something broader? If I’m offered a discount on a running application but my son doesn’t get the same offer because I run more than he does, is that going to be a problem?
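The running-app scenario can be made concrete with a toy sketch. Everything here is hypothetical, including the function name, the thresholds and the offers; the point is only that a rule applied neutrally to behavioral data still produces different treatment for different people.

```python
# Hypothetical promotion rule for the running-app scenario above.
# It keys off recorded activity, not any protected characteristic,
# yet two users of the same app receive different offers.

def pick_promotion(runs_per_week: int) -> str:
    """Return a promotion tier based on recorded activity (illustrative)."""
    if runs_per_week >= 5:
        return "20% discount"    # heavier users get the richer offer
    return "free trial week"     # lighter users get a different offer

# A frequent runner and an occasional runner see different offers.
parent_offer = pick_promotion(runs_per_week=6)  # "20% discount"
son_offer = pick_promotion(runs_per_week=2)     # "free trial week"
```

Whether this kind of differentiation crosses the line into a “discriminatory practice” is exactly the question the guidance leaves open.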
If our AIs differentiate us based on their algorithms and we get treated differently from one another because of it, might we be creating a new ground of discrimination? Just a few more things we need to think carefully about — and get right.