Some folks say we are as close as we have ever been to comprehensive data privacy legislation in the United States. The areas of legislative divergence are well known, and the landscape of common ground continues to be charted and defined. Nevertheless, as we race toward a contentious mid-term election (just over six months away), political realities lend ammunition to the skeptics among us. It’s a bit like solving a calculus equation — lawmakers get ever closer to the finish line, but does the limit exist? That remains to be seen. Here's what’s happened since the last roundup:

  • The U.S. and other APEC economies declared a new “Global CBPR Forum.”
    • The announcement transforms the Cross-Border Privacy Rules framework from a regional mechanism into one open to any country with compatible privacy laws, joinable through a consensus process. The structures of CBPR and the related Privacy Recognition for Processors appear to remain the same: voluntary data protection frameworks with layers of accountability that let certified businesses demonstrate baseline, interoperable standards when doing business across borders.
  • U.S. Chamber of Commerce asked the Senate for more scrutiny (and delay) of an FTC nomination.
    • As reported by Axios, the industry group wrote a letter to Senators asking that they first hold another hearing with Alvaro Bedoya before voting to confirm him as the fifth (tie-breaking) Commissioner of the Federal Trade Commission. It has now been more than seven months since Bedoya was nominated to the vacant position.
  • Algorithms dominated the policy conversation, and the news.
    • The U.S. Department of Commerce announced the appointment of 27 civilians to the National AI Advisory Committee, part of the National AI Initiative. The initiative is focused on U.S. artificial intelligence leadership and competitiveness, though privacy considerations are a small part of its ambit (especially in the context of law enforcement practices). Meanwhile, it is hard to keep up with new scholarship on AI and ethical data processing. A series from MIT Technology Review questioned whether AI serves as a tool of colonial power. A New York Times Magazine piece asked whether we should trust what natural language algorithms say. (And a thoughtful response from Professor Emily Bender reminded us that the types of questions we ask about AI sometimes determine whether we get useful answers.) But forget AI speaking English: is algorithmic moderation changing how humans speak?
    • Do you have thoughts about the future of AI governance? Next week is your last chance to submit comments on NIST’s draft AI Risk Management Framework. And the U.S. Chamber of Commerce Technology Engagement Center is seeking comments to be incorporated into the recommendations of its Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, which is also hosting in-person hearings in Cleveland, Silicon Valley, and London in the coming weeks.
  • Youth privacy issues continued to make waves.
    • California legislators began debate on an Age-Appropriate Design Code, modeled after the U.K. code of the same name, which governs safety and data practices for services affecting kids and teens. Meanwhile, a bipartisan group of U.S. state attorneys general wrote a letter to TikTok and Snapchat asking the companies to give parents more control and oversight of their kids’ experiences.
    • At the same time, a convening of industry stakeholders released an operational document to assist businesses in incorporating teen privacy and safety considerations into the design of new systems: “A Roadmap for Considering Teen Privacy and Safety.” The World Economic Forum also released an operational toolkit, “Artificial Intelligence for Children,” “to help companies develop trustworthy artificial intelligence for children and youth.”
    • If you need a foundation for these evolving issues, IAPP has published a newly revised and expanded textbook, “Children’s Privacy and Safety,” with Executive Editor Kalinda Raina.

Please send feedback, updates and calculus equations to cobun@iapp.org.