
United States Privacy Digest | A view from DC: Should ChatGPT slow down?


It may not be a viral dance move yet, but the latest hot trend in tech circles is to call for a slowdown on artificial intelligence development. This week, an open letter from the "longtermist" Future of Life Institute called for a six-month pause on the development of AI systems such as large language models due to concerns around the evolution of “human-competitive intelligence” that could bring about a plethora of societal harms.

Scholars agree that caution in the development of advanced algorithmic tools is essential. But many of the hipsters of AI prudence — those who have been asking for the embrace of mindful practices and guardrails since before it was cool — do not look favorably on the open letter. Even as they also encourage more responsible innovation and deployment of AI systems, they remind us to avoid falling for hype and exaggerated claims about AI’s near-term capabilities.

In a post on the blog AI Snake Oil, Sayash Kapoor and Arvind Narayanan analyzed the major claims and solutions raised in the open letter:

“We agree that misinformation, impact on labor, and safety are three of the main risks of AI. Unfortunately, in each case, the letter presents a speculative, futuristic risk, ignoring the version of the problem that is already harming people. It distracts from the real issues and makes it harder to address them. The letter has a containment mindset analogous to nuclear risk, but that’s a poor fit for AI. It plays right into the hands of the companies it seeks to regulate.”

Others agree. On her own blog, Emily Bender, co-author of the widely cited critique of large language models “On the Dangers of Stochastic Parrots,” pushed back on the notion that AI harms are driven primarily by the growing size or perceived intelligence of these systems. Though she agrees with many of the policy goals expressed in the letter, she argues the real risks and harms are more “about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).”

Nevertheless, building on the open letter, the Center for Artificial Intelligence and Digital Policy delivered a complaint to the U.S. Federal Trade Commission calling for the FTC to investigate OpenAI’s practices and ensure the company follows guidance and principles around the deployment of AI systems. Unlike the open letter, the complaint highlights some specific privacy concerns, including the alleged mishandling of user chat histories, which could imply security vulnerabilities.

Of course, the privacy risks of generative AI are not limited to basic security vulnerabilities. My colleague, Katharina Koerner, CIPP/US, recently published an overview of privacy considerations in the development and deployment of generative AI systems.

To my mind, the data privacy impacts of generative AI systems fall into two primary buckets.

First is the possibility that personal data is caught up in the data sets used to train AI models, whether these are based on publicly available data or privately held data sets. Today, Italy’s data protection authority announced an investigation into OpenAI around questions of the company’s legal basis for such processing.
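
To make the first bucket concrete, below is a minimal sketch of the kind of scrubbing pass a team might run over text before it enters a training corpus. The patterns and function names are illustrative assumptions of mine, not any vendor's actual pipeline; production systems typically pair regexes like these with trained named-entity recognizers, since patterns alone miss names and many other identifiers.

```python
import re

# Illustrative patterns only; real pipelines are far more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_document(text: str) -> str:
    """Replace likely personal data with typed placeholders before the
    text is added to a training corpus."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(scrub_document("Reach Jane at jane.doe@example.com or 555-867-5309."))
# Reach Jane at [EMAIL] or [PHONE].
```

Note what the sketch cannot do: the name "Jane" sails through untouched, which is exactly why legal-basis questions about training data do not reduce to a filtering step.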

Second is the collection and use of personal data through user interactions with generative AI systems. This is one area where privacy professionals can have an immediate impact as services built on large language models are designed. Future of Privacy Forum's Stephanie Wong analyzed some of these considerations in the workplace.
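
As a sketch of what that impact could look like in a workplace deployment, consider a small gateway that redacts personal data from prompts and attaches the organization's retention and training preferences before anything reaches a vendor's model. Everything here is hypothetical: GatewayPolicy, forward_prompt and the option flags are names I invented for illustration, and real vendor clients expose their own, differing controls.

```python
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class GatewayPolicy:
    """Privacy knobs for a hypothetical workplace LLM gateway."""
    redact: Callable[[str], str]      # scrubbing step applied to every prompt
    retain_history: bool = False      # keep transcripts server-side?
    allow_training_use: bool = False  # may the vendor train on these prompts?

def forward_prompt(prompt: str, policy: GatewayPolicy,
                   call_model: Callable[[str, dict], str]) -> str:
    """Apply redaction and retention preferences before the prompt leaves
    the organization. `call_model` stands in for whatever vendor client
    the deployment actually uses."""
    safe_prompt = policy.redact(prompt)
    options = {
        "store": policy.retain_history,                # hypothetical flags,
        "training_opt_in": policy.allow_training_use,  # not a real API's parameters
    }
    return call_model(safe_prompt, options)

# Demo with a stub model and a toy email redactor.
policy = GatewayPolicy(redact=lambda s: re.sub(r"[\w.+-]+@[\w-]+\.\w+", "[EMAIL]", s))
print(forward_prompt("Summarize: reach me at jane@corp.example",
                     policy, lambda p, opts: f"(stub) {p}"))
# (stub) Summarize: reach me at [EMAIL]
```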

There is a wide spectrum of practices one can imagine for how such systems are designed. Are user chat histories siloed or commingled? Are they ingested into the model for iterative training or not? Are plug-ins and add-ons allowed that could ingest or expose users’ communications, contacts or other potentially sensitive files? These are classic privacy questions that are not predetermined by the functions of a large language model.
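
Those questions translate directly into configuration decisions. Here is one conservative set of defaults, sketched in code; the field names are invented for illustration and do not correspond to any real product's settings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChatPrivacyDefaults:
    """Hypothetical privacy posture for an LLM chat service."""
    silo_histories_per_user: bool = True      # never commingle users' chats
    train_on_transcripts: bool = False        # no iterative training on user chats
    history_retention_days: int = 30          # purge transcripts after this window
    allowed_plugins: frozenset = frozenset()  # plug-ins are deny-by-default

defaults = ChatPrivacyDefaults()
assert not defaults.train_on_transcripts  # training use should be an explicit opt-in
```

None of these answers is dictated by the underlying model; each is a product choice, which is precisely where privacy professionals can weigh in.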

As always, when we encounter new risks from new technologies, we should return to fundamental principles of privacy. Earning the trust of the users of these tools means setting proper expectations about their utility — and their limits — while remembering the hard-fought privacy lessons we have already learned.

Here's what else I’m thinking about:

  • The California Privacy Protection Agency’s final rules were approved. If you, like me, have been waiting to dig deep on the substance of the agency’s first round of CCPA implementation rules, you have now officially run out of excuses. The rules flesh out and smooth over the rough edges of California’s privacy law, and compliance with the sections of the CCPA they address is immediately subject to these clarifications. In some places, the rules significantly impact interpretations of the law. For example, Section 7002 provides a multifactor test by which the CPPA will measure whether processing purposes are consistent with consumer expectations. Such enhancements are worthy of further analysis and detailed comparison with other frameworks. Meanwhile, as FPF’s Keir Lamont reminds us, California’s second round of rulemaking is off to the races.
  • As the sixth U.S. state privacy law was signed, analysts seem to agree that Iowa’s rivals Utah’s as the narrowest and most limited state consumer privacy law. Whether this signals more or less pressure for a comprehensive federal law is a subject of ongoing debate. IAPP’s Anokhy Desai analyzed the legal contours of the new Iowa law.

Upcoming happenings:

  • 3 April at 20:00, the Center for Democracy and Technology hosts its inaugural Spring Fling (Hotel Monaco).
  • 3-5 April, IAPP hosts the Global Privacy Summit (Convention Center).
  • 6 April at 10:00, the Information Technology and Innovation Foundation hosts a virtual webinar, “What are the consequences of backdoors for online privacy?”

Please send feedback, updates and stochastic parrots to cobun@iapp.org.

