Greetings from the IAPP Privacy. Security. Risk. 2023 conference in San Diego, California!
Since your faithful columnist is not in Washington, D.C., this week, it would seem wrong to pretend otherwise. Instead, I'm sharing here with you some themes from our West Coast gathering.
As always, it is a great feeling to be surrounded by the community of privacy professionals ready to share knowledge, network and maybe pick up some of the cool new IAPP stickers. Those privacy astrological sign stickers sure are popular!
It feels as though this year the keynote stage is fully dominated by women with big ideas about the potential and peril of future technologies. Though I'm sure our Editorial team will bring in-depth recaps from the stage, please accept some initial reflections as an amuse-bouche.
Orly Lobel, author of "The Equality Machine," kicked things off with a wide-ranging presentation on how artificial intelligence can fuel a better future for humanity. It was nice to start on a positive note, though my takeaway from Lobel's presentation was an even keener awareness that people must do the hard work to ensure new technologies are built to solve problems rather than exacerbate them.
At one point, Lobel gave a glimpse of some social science literature on trust in technology.
Often in the policy world we speak of consumer trust as an important goal, but Lobel reminded us that a more powerful goal is to right-size trust, to trust "rightly and accurately." That is, we should find the sweet spot between what social scientists call "algorithmic adoration" and "algorithmic aversion." The more we find ways to ensure we can trust technological systems when they deserve it — and not when they don't — the more we can build a better future.
There are also many exciting possible advances in what Nita Farahany describes as the field of neurotechnology. But she took the keynote stage to focus more on the perils of technological advances that could one day passively reveal the contents and qualities of our thoughts.
Farahany's book "The Battle for Your Brain" argues for a new fundamental human right to cognitive liberty. This right is called into question by advances in neurotechnology, which could eventually threaten privacy in ways that we have never seen. As Farahany put it on stage, "Brain data is the holy grail. It is the last piece of privacy that can be breached."
Farahany's speech ended with a call to action: "We need commercial design to better align with cognitive liberty."
The themes of both keynotes were nicely presaged by the remarks from IAPP CEO J. Trevor Hughes, CIPP, who took to the stage to kick off the conference with a story about brakes. With some fascinating historical flourishes, he reminded us that automobile brakes did not result in slower cars. Instead, they were the necessary precondition to make cars that could go fast.
Just like cars, AI and other advanced technologies will need to be designed and deployed with responsibility and safety at their core. Not only is this better for people, but it is actually the way to fuel innovation. Human choices will guide the direction in which we innovate. Whether we build technologies that respect cognitive liberty or challenge it is up to us.
This is precisely why the IAPP has started to lead the charge to create a robust AI governance profession alongside the privacy profession. In case you missed it, Hughes' call to action this week brought the message home: The time to professionalize AI governance is now.
As always, being surrounded by the people who make up this vibrant, nimble and inclusive community brings me hope that we will get it right. But there's a lot of work still to do.