Remember 10 years ago, when Netflix ran a competition inviting anyone and everyone to use vast amounts of its user data to build an algorithm that better predicted viewer preferences? In releasing the data, Netflix said it had scrubbed it of personally identifiable information and that it was therefore anonymous. But Arvind Narayanan, then a Ph.D. student at the University of Texas, and a colleague discovered that the reportedly anonymized data could in fact be de-anonymized, and fairly easily at that. The findings thwarted Netflix's plans for a second, similar contest. In this episode of The Privacy Advisor Podcast, Narayanan discusses life as a researcher at Princeton, including his recent work on online tracking, which found that many sites weren't even aware of the third-party tracking occurring on their pages, and that "adult sites" are among the most privacy protective.
2 Sept. 2016
The Privacy Advisor Podcast: Arvind Narayanan
