The Privacy Advisor Podcast: Arvind Narayanan
02 September 2016

Remember 10 years ago, when Netflix held a competition inviting anyone and everyone to use tons of its user data to come up with an algorithm to better predict viewer preferences? In releasing the data, Netflix said it had scrubbed it of PII and that it was therefore anonymous. But Arvind Narayanan, then a Ph.D. student at the University of Texas, and a colleague discovered that the supposedly anonymized data could in fact be de-anonymized, and pretty easily, too. The findings thwarted Netflix's plans for a second, similar contest. In this episode of The Privacy Advisor Podcast, Narayanan discusses life as a researcher at Princeton, including his most recent work on online tracking, which found that many sites weren't even aware of the third-party tracking occurring on their pages and that "adult sites" are among the most privacy-protective.

