
A view from DC: The professor and the politicos


After an action-packed week of hearings, conferences and closed-door meetings in Washington, D.C., it can be a valuable exercise to slow down and home in on a few important details. Since the season of smelling the roses is over, I find myself instead stopping to listen to congressional testimony.

Alongside the wide variety of brilliant advocacy and entertaining repartee on display this week, one set of testimony stands out for closer investigation and analysis. Boston University School of Law professor Woodrow Hartzog, one of the U.S.'s most influential privacy scholars, delivered timely and targeted warnings to the U.S. Senate Subcommittee on Privacy, Technology and the Law, part of the Committee on the Judiciary.

Appearing next to Microsoft's Brad Smith and NVIDIA's William Dally, the professor called for more than self-regulation, reminding lawmakers that without a backdrop of legal protections, the spread of ethical principles and transparency markers represents merely "half-measures."

The hearing was one of a pair of simultaneous fora, which I previewed last week and which IAPP's editorial team covered in depth. It was led by Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., who took the opportunity to release a one-page bipartisan framework describing the various legislative interventions on AI that they would both support, at least in principle.

Hartzog asked Congress to go further, weaving a people-centered perspective on technological governance through the solutions highlighted in his prepared testimony.

First, Hartzog reminded policymakers that AI can never be neutral. Like all technologies, it does not exist outside of the messy world of the humans who design it. This perspective asks regulators to consider more than how a system is used.

"Values are deeply embedded into the design of technology human beings build. Every technology sends signals to people and makes a certain task easier or harder," Hartzog wrote. Designers of AI systems can be viewed by the law, whether through new or old regulations, as contributing to later harm. This is just as true in the tortious context of "defective design" as in the consumer protection lens, where developers could be viewed to provide the means and instrumentalities of unfair or deceptive systems.

Relationships between people — the roots of power dynamics — are the crux of Hartzog's second argument.

In simple terms, AI systems bestow power, and imbalances of power underlie the form these systems can take. "Since so many risks of AI systems come from within relationships where people are on the bad end of an information asymmetry," he argued, "lawmakers should implement broad, non-negotiable duties of loyalty, care, and confidentiality as part of any broad attempt to hold those who build and deploy AI systems accountable."

To the same end, Hartzog also supports strengthening existing consumer protection enforcement mechanisms as well as creating new bright-line rules for AI, such as those proposed in the recent Zero Trust AI Governance report published by Accountable Tech, the AI Now Institute, and EPIC.

Finally, Hartzog asked policymakers to "resist the inevitability narrative" around AI. Here, too, there is a forgotten element of humanity behind the innovation buzzwords.

Humans innovate technical solutions in the directions they choose to prioritize — tech does not spring to existence in a vacuum. If we are solving only for progress and convenience, other values may be left by the wayside. Even if developers design perfectly unbiased systems, Hartzog warned, they can be used to "dominate, damage, misinform, manipulate, and discriminate."

Similarly, he said if regulators jump "straight to putting up guardrails," they "fail to ask the existential question about whether particular AI systems should exist at all, and under what circumstances it should ever be developed or deployed."

Of course, Hartzog is not the only scholar thinking about the proper contours of governance over complex sociotechnical systems like those fueled by AI. For a helpful reading list, one need look no further than the detailed citations in his prepared testimony, which point to recent scholarship across domains. There is no shortage of ideas about governing AI, just as there is no shortage of governors who would seek to consider it.

Despite the flurry of activity around AI, we are unlikely to see coordinated legislation at the U.S. federal level this year. You don't have to take my word for it. This is according to one of the co-chairs of the Senate's AI caucus. "I would like to have something that we can pass in this congress," Sen. Martin Heinrich, D-N.M., told Leigh Ann Caldwell on Washington Post Live. "Some people have said by the end of the year. I don't see things coming together that quickly, but I do think we could see a package in the following year."

A package of what is still anyone's guess.

Here's what else I'm thinking about:

  • A warning for people-search companies: If it looks like a credit report, it must follow credit report rules. The U.S. Federal Trade Commission reached a settlement with a handful of related companies, including Instant Checkmate and TruthFinder. The case is a shot across the bow for companies that have looked to avoid compliance with the Fair Credit Reporting Act, which prescribes data protection rules for background checks. The deception claims in the case also serve as a reminder of manipulative design practices to avoid, as well as the importance of accuracy as a foundational privacy principle. More analysis to come.
  • California's Delete Act passes the legislature, one step away from becoming law. California Senate Bill 362 cleared the California State Legislature 14 Sept. with Senate concurrence on Assembly amendments. Policy wonks have been watching the progress of the bill, which proposes a uniform request mechanism for consumers to have their data deleted by registered "data brokers." You can see a comparison of the Senate and Assembly versions of the bill here.


Please send feedback, updates and late-blooming roses to cobun@iapp.org.

