Greetings from Portsmouth, New Hampshire!
It's amazing to think we're already in April. That means Q1 is in the rearview, the crocuses are in bloom, and our Global Privacy Summit is only weeks away. As it does every year, IAPP HQ is abuzz with preparations for our annual flagship event. Have you checked out this year's programming yet? If you're thinking about registering, I'd do so soon, as this event usually sells out.
Though GPS is an international event, there will naturally be plenty of talk and education on what's happening legislatively here in the U.S. We have a wide range of programming focused on preparing for the California Consumer Privacy Act, including a daylong Active Learning Day session on "Understanding the CCPA." We also have excellent sessions on a prospective federal privacy law, as well as interviews with Federal Trade Commission Chair Joseph Simons and Commissioner Noah Phillips, and another with Bureau of Consumer Protection Director Andrew Smith.
As discussion of privacy legislation continues, two posts stood out to me this week. In a column for the Brookings Institution, Mark MacCarthy dives into the regulatory issues raised by artificial intelligence and machine learning. He acknowledges the new data processing capabilities made possible by AI and ML and the effects they will have on individual privacy. "For this reason," he writes, "policymakers need to craft new national privacy legislation that accounts for the numerous limitations that scholars such as Woody Hartzog have identified in the notice and consent model of privacy that has guided privacy thinking for decades." But what about the specifics? Are there regulations already in place that can cover some of the privacy issues raised by AI and ML? Should a new law regulate AI research? How would it address bias in AI, a growing concern among scholars in the field? Finally, MacCarthy addresses whether a new regulation should require explainability. "This trade-off between accuracy and explainability need not be dictated by a one-size-fits-all approach embodied in law," he writes.
In a separate post, the Center for Democracy & Technology's Joe Jerome tackles de-identification and privacy legislation, arguing that de-identification should not be "an automatic get-out-of-jail-free card." He notes that different techniques "vary in effectiveness, and may not work to hide individual identities in some cases." He also argues that even where de-identification does protect individual privacy, it may not prevent harm to groups of people, citing, for example, the Strava heat map incident that revealed sensitive locations of military personnel. His suggestions? Jerome writes that any privacy legislation "should not categorically exempt de-identified data from privacy and security requirements," and that legislation is not the only answer; regulatory guidance and rulemaking may have a place here, too. Finally, he suggests a "trust, but verify" paradigm that brings together law and policy, including transparency requirements and contractual agreements.
I'd be interested to hear your reaction to both posts.
Finally, I have to eat my predictions from last week. Clearly, I won't be winning our staff college basketball tournament bracket. I foolishly picked a Duke-versus-UNC final, and lo and behold, both teams were eliminated last weekend. Since a couple of staffers still have Virginia and Michigan State as their championship picks, I've got to root for an Auburn upset, though I wouldn't mind an MSU victory.