Like a rancher smelling profit in the stench of cattle, the privacy professional has a knack for sniffing out privacy lessons in every new development. True to form, as others consider the free speech, safety and equity implications of recent Twitter events, I see a series of potential privacy lessons.

A quick recap: Elon Musk’s long-held distaste for a Twitter account dedicated to amplifying the real-time location of his private jet came to a head this week, after he separately accused a “stalker” of interfering with a car carrying one of his children. As he posted a video of his interaction with the stalker and called on his followers to help identify the man, Musk also announced an update to his platform’s private information and media policy. “Any account doxxing real-time location info of anyone will be suspended, as it is a physical safety violation. This includes posting links to sites with real-time location info,” he tweeted. Immediately, the jet-tracking account was suspended. When other popular accounts, including journalists, began sharing alternate means to follow the jet on platforms like Facebook and Mastodon, they were also suspended.

As this story evolves, I can’t help but see Twitter grappling with a few enduring privacy lessons:

  1. Public is not the opposite of private. Publicly available information has often been exempted from privacy protections. But practitioners have long known the public-private distinction is a spectrum, not a binary. The internet, search technologies and advanced matching algorithms have made public information more readily accessible, removing it from highly contextual systems, like the Federal Aviation Administration’s flight records database, and reducing the effort required to obtain it. This is what Cornell Tech professor Helen Nissenbaum would call deobfuscating the information. Setting aside the question of whether public figures have a reduced privacy interest, this example highlights the growing problem with relying solely on a public-private distinction when measuring privacy risks.
  2. Individual rights to privacy and freedom of speech exist in balance. Even free speech purists, as Musk has described himself, agree there should be limits on individuals’ ability to amplify private information. Given the sensitivity of location data, another person’s real-time location is certainly worth considering as such a limit. But defining the contours of that limit is complicated in a world where amplification is limitless. Who is responsible for revealing the location of a private jet? The jet owner? The FAA? The person who tweets about it? The journalist who writes about the tweet?
  3. Almost everything is identifiable, with enough effort. Our modern world has eroded the distinction between the privacy invasiveness of, say, a set of geospatial coordinates and a photograph of a public space. Case in point: Bellingcat, an international collective of online armchair investigators, quickly geolocated Musk’s video of his alleged stalker by cross-referencing tiny details in the surroundings with publicly available street-view data. This highlights the shift away from categorical ideas like “public information” toward a focus on the purposes and means of processing data. A photograph, even stripped of metadata, can reveal a location given enough effort and available data.

Setting all advanced coursework aside, if rumors of planned policy changes prove correct, Musk’s Twitter could be on track to re-learn a few data privacy lessons too. Hints about requiring free users to opt in to targeted ads, and even reprising practices directly at issue in Twitter’s May settlement with the U.S. Federal Trade Commission, helped spark a rare interview with Samuel Levine, director of the FTC’s Bureau of Consumer Protection. Levine reminded Axios reporter Ashley Gold there is “no pause button” on FTC consent decrees. “As far as I’m concerned, it’s up to companies to ensure they are constantly in compliance with the FTC and responding to our requests.”

Keeping up with privacy lessons is a never-ending practice. And the stakes of getting it wrong will always be highest for the most popular platforms.

Here's what else I’m thinking about:

  • The Kids Online Safety Act has a non-zero chance of passing this year. After Sen. Richard Blumenthal, D-Conn., posted an updated version of his bill, rumors about efforts to include it in the omnibus spending bill, which must pass next week, ran wild. It is unlikely that the “four corners” of Congressional commerce committee leadership are all on board with including the bill, which was approved by the Senate Commerce Committee but never considered by the House committee. The Center for Democracy and Technology, one of the civil society groups that criticized KOSA, reviewed the updates and expressed renewed concern, writing it would “still push covered platforms to monitor & filter lawful content, & older teens will be unable to access websites/messaging apps without parental surveillance.”
  • Sen. Edward Markey, D-Mass., wants his bill included in the omnibus package instead. In a letter to leadership, Markey highlighted four areas of agreement that enjoy bipartisan support through both his bill, the Children and Teens’ Online Privacy Protection Act, and the American Data Privacy and Protection Act: a ban on targeted ads to children; an extension of existing privacy protections for children to young teens, aged 13 to 17; the creation of a Youth Marketing and Privacy Division at the FTC; and an FTC study of the COPPA safe harbor program to ensure it is protecting the interests of children.
  • Meanwhile, a new lawsuit seeks to block California’s Age-Appropriate Design Code. NetChoice’s suit to block implementation of the CAADC made waves in U.S. policy circles because it calls into question future AADC bills and other so-called “privacy plus” bills, i.e., privacy plus safety and privacy plus mental health, including KOSA. The complaint raises First Amendment, Fourth Amendment and preemption arguments against the California law.
  • The European Commission released its draft adequacy decision for the EU-U.S. Data Privacy Framework. While I focused on this first glimpse of updates to the privacy principles for Privacy Shield businesses, my counterpart in Brussels, Isabelle Roccia, described initial reactions in the European policy world.
  • There are patterns in our FTC comments, according to this cogent Future of Privacy Forum summary of “points of emphasis” in submissions to the FTC’s rulemaking on commercial surveillance.
  • Cookies can reveal personal health information, according to guidance from the Office for Civil Rights at the U.S. Department of Health and Human Services. The office warns Health Insurance Portability and Accountability Act-covered entities and their partners to think twice before deploying online tracking technologies on their websites, as it could lead to the impermissible disclosure of protected health information to vendors.
  • Can fake images invade our privacy? Lensa, the viral personalized avatar app that leverages the open-source Stable Diffusion text-to-image generator, is the subject of an MIT Technology Review article describing the high incidence of sexualized results for women, particularly Asian women. It’s a reminder that intimate privacy violations can occur in the context of generated content. Prisma Labs, the makers of the app, published a blog post addressing the steps they’ve taken to reduce “NSFW results.”

Under Scrutiny

    • Neustar, a data broker, is the subject of a letter from Sen. Ron Wyden, D-Ore., calling on the FTC to investigate its sale of internet metadata to a Department of Defense-funded research project at Georgia Tech.
    • XR technologies are the subject of a report from Access Now and the Electronic Frontier Foundation, calling on regulators to ensure that human rights protections extend to the metaverse.

Please send feedback, updates, and privacy lessons to cobun@iapp.org.