The U.S. Federal Trade Commission “does not tolerate companies that over-collect, indefinitely retain, or misuse consumer data” and is “committed to fully enforcing the law against illegal use and sharing of highly sensitive data,” such as location- and health-related information. In its most recent Business Blog post, FTC acting Associate Director for the Division of Privacy and Identity Protection Kristin Cohen sent a strong warning signal to businesses that continue to collect and share sensitive data. The post stands apart from most FTC staff blogs in its persuasive style and its name-and-shame approach to specific industries, e.g., the “often shadowy ad tech and data broker ecosystem.” The FTC is also clearly mindful of the timeliness of a warning about datasets that could include reproductive health inferences, coming close on the heels of President Joe Biden’s executive order calling on the commission to “consider taking steps to protect consumers’ privacy when seeking information about and provision of reproductive health care services.”

Business claims about “anonymized” data are one of the practices the blog highlights as particularly problematic and, in fact, “often deceptive.” Organizations should take note of this clear warning and revisit any claims they make about anonymous and deidentified data, whether in their privacy notices or elsewhere. The idea that claims about anonymization are difficult to prove is not new. Back in 2009, Georgetown University Law professor Paul Ohm described the “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization.” Since then, privacy-enhancing technologies have continued to advance, including new capabilities to deidentify datasets while preserving utility, but so has our capacity to combine datasets and reverse efforts at deidentification.
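To make that linkage risk concrete, here is a minimal, hypothetical sketch in Python. It uses pandas and entirely synthetic records (the column names and values are illustrative, not drawn from any real dataset or enforcement matter) to show the classic re-identification technique: joining a “deidentified” table to a public auxiliary source on shared quasi-identifiers. Latanya Sweeney famously estimated that ZIP code, birth date and sex alone can uniquely identify most of the U.S. population.

```python
# Illustrative sketch only: synthetic data, hypothetical column names.
import pandas as pd

# A "deidentified" health dataset: names stripped, but quasi-identifiers
# (ZIP code, birth date, sex) were left intact.
deidentified = pd.DataFrame({
    "zip":        ["20001", "20001", "90210"],
    "birth_date": ["1985-03-12", "1990-07-04", "1985-03-12"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["condition_a", "condition_b", "condition_c"],
})

# A public auxiliary dataset (think voter rolls) that carries names
# alongside the same quasi-identifiers.
auxiliary = pd.DataFrame({
    "name":       ["Jane Doe"],
    "zip":        ["20001"],
    "birth_date": ["1985-03-12"],
    "sex":        ["F"],
})

# One join on the shared quasi-identifiers re-attaches a name to a
# supposedly anonymous record.
reidentified = deidentified.merge(auxiliary, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
#        name    diagnosis
# 0  Jane Doe  condition_a
```

The point, and the FTC’s point, is that stripping direct identifiers is a starting point, not anonymization; whether the remaining quasi-identifiers are linkable is an empirical question about the data and the auxiliary sources available.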

At an even more basic level, companies should avoid claims about anonymization for the same reason they avoid claims about any ambiguous or controversial term: a lack of widely accepted meaning can lead to charges of deception. Recently, policy voices have pointed out a lack of consensus around the terms “anonymized” and “deidentified,” including differences in understanding about which of the two conveys a stronger state of protection. For starters, it is clear that deidentification is not anonymization, and the Future of Privacy Forum has a helpful chart that attempts to clarify and standardize these terms. But even before making claims about deidentification, it is worth verifying that your technical practices align with government guidance such as:

Here's what else I’m tracking:

  • The House Energy and Commerce Committee is expected to hold a markup next week (week of July 21) that will likely cover the American Data Privacy and Protection Act, though the markup has not yet been formally scheduled.
  • California’s privacy regulator wrote a letter to U.S. House Speaker Nancy Pelosi, D-Calif., critiquing the draft ADPPA. Some have suggested the letter, penned by California Privacy Protection Agency Deputy Director of Policy and Legislation Maureen Mahoney, is worth reading not only for the substantive critiques it lodges against the ADPPA, but also for the insights it provides into how the CPPA is thinking about its own regulatory powers and the rules it plans to enforce, such as the agency’s claims about a right to opt out of automated decision-making.
  • The “era of the global internet is over,” according to a report on confronting reality in cyberspace from the Council on Foreign Relations, which prescribes a variety of policy recommendations, including entry into new trade agreements and the adoption of “a shared policy on digital privacy that is interoperable with Europe’s General Data Protection Regulation.”
  • The June FTC report to Congress on using artificial intelligence for innovation is more about not using AI, urging “great caution” about relying on algorithms as a policy solution. Congress had directed the FTC to examine ways that AI “may be used to identify, remove, or take any other appropriate action necessary to address online harms.” Instead, the report reminds policymakers that harms are not always lessened by AI tools, which can be inaccurate, biased, and discriminatory, and which can incentivize reliance on large-scale data collection. Commissioner Noah Phillips issued a dissenting statement describing why he believed the FTC missed the assignment.
  • Speaking of AI, the Mozilla Foundation’s next season of its IRL podcast will focus on AI in Real Life.

Under scrutiny

Amazon has responded to the most recent inquiry from U.S. Sen. Edward Markey, D-Mass., into its Ring home security products and its practices for responding to law enforcement requests.

Upcoming happenings

  • July 20 at 12:30 p.m. EDT, Kelley Drye hosts a webinar on How To Protect Employee/HR Data and Comply with Data Privacy Laws (virtual).
  • July 20 at 2 p.m. EDT, CDT and SIIA host Democracy Affirming Technology: Restoring Trust Online (U.S. Capitol Visitor Center).
  • July 21 at 1 p.m. EDT, the IAPP hosts a sponsored web conference on the State of US privacy: Countdown to compliance (virtual).
  • July 21 at 6 p.m. EDT, FPF hosts a DC Privacy Interns Happy Hour for interns and professionals to mingle (Franklin Square).

Please send feedback, updates and anonymous data to cobun@iapp.org.