
A view from DC: We need zero trust for privacy and AI

Inside our thick-walled tech policy bubble, people talk a lot about trust. Policymakers are told, repeatedly and at length, that consumer trust is essential for a thriving digital economy. Trust is required for the widespread adoption of new technologies. It is even necessary for a functioning democracy.

All of this is self-evident. But often, it seems consumer trust is spoken of as though it is the end goal for good public policy, a fountain of youth that, once discovered, unlocks a new world of innovation and prosperity. Such a framing can be distracting and even counterproductive. Trust is not a goal in and of itself. It is a by-product. Trust in technology emerges when we build a digital ecosystem where consumer expectations fit reality, where widespread harms are mitigated and where consumers can accurately measure any differences in the riskiness of systems they choose to use.

Most agree that part of the role of government in an area of the law like consumer protection is to secure baseline protections that prevent widespread harms. For example, I know when I purchase a car that it will meet minimum standards for quality, safety and environmental impact. If it turns out to be a faulty vehicle, I know my local lemon law will ensure the manufacturer replaces it. But above this baseline, vehicle manufacturers work hard to earn my trust. Though they might try to buy it, they won't succeed. Instead, earning trust is most effective when trustworthiness is demonstrated — car ads are sprinkled with metrics about independent tests, quality assurance indicators, satisfaction metrics and government ratings. Thanks to the profusion of outside indicators of quality and safety, on top of baseline government protections, vehicle manufacturers realize that investing in quality pays dividends in trust.

In this way, trust emerges when it is questioned rather than assumed. Baseline trust in a technology may require government assurance. Above this, consumers must be given the tools they need to make informed decisions about the products and services they use. This is as true for privacy as it is for cars. The more we establish independent and verifiable indicators of data protection practices, the more we empower consumers to put their trust only where it is deserved. Part of this picture involves trained and credentialed privacy professionals who will be able to keep up with ever-evolving best practices. But learning from the security field, we should also embrace a culture of zero trust — demanding transparency, validation, and metrics at every opportunity, and working together to build mechanisms that can demonstrate trustworthy privacy practices.

The tricky part, of course, is getting this balance right. In the privacy world, what are the minimum standards that should be assured by government and other accountability mechanisms, and what are the areas where diverging privacy preferences should be respected? Which technologies are so essential for our everyday life — or so capable of affecting our livelihood — that they deserve heightened baseline protections?

We see these questions reflected in the surging policy debate around building responsible — and trustworthy — AI systems. In fact, trustworthiness is a major theme in the AI Risk Management Framework, published this year by the National Institute of Standards and Technology. As NIST explains, "The current lack of consensus on robust and verifiable measurement methods for risk and trustworthiness, and applicability to different AI use cases, is an AI risk measurement challenge." To begin to solve this challenge, the framework describes characteristics of trustworthy AI systems, including:

  • Safe.
  • Secure and resilient.
  • Explainable and interpretable.
  • Privacy-enhanced.
  • Fair, with harmful bias managed.

In the graphical version of the list, NIST displays these five characteristics sitting on top of a "valid and reliable" baseline because, as the framework explains, validity and reliability are necessary conditions of trustworthy AI systems. Next to this sits a vertical box with two more characteristics, which relate to all the others: accountability and transparency.

In this way, NIST reminds us that we must work to find ways to enhance transparency and accountability at every opportunity, including in demonstrating the privacy practices we bake into automated systems. Even as we wrestle with building these best practices, policymakers are considering general rules for AI that could bake them into law.

At the same time, in the U.S. federal conversation, agencies are working hard to remind companies that algorithmic systems are subject to many existing technology-neutral legal protections. This week, in a display of interagency coordination, watchdogs from across the federal government released a joint statement on enforcement efforts against discrimination and bias in automated systems. The statement did not provide any new information, but rather served as a reminder of the U.S. government's commitment to ensuring that AI is subject to the existing legal protections for consumers, workers, tenants, borrowers and voters. It highlighted ongoing initiatives from the Consumer Financial Protection Bureau, the Department of Justice's Civil Rights Division, the Equal Employment Opportunity Commission and the Federal Trade Commission.

Oversight like this is one part of a healthy zero-trust approach to governance. Only by embracing transparency and accountability and questioning ourselves and each other will we continue to build a trustable digital ecosystem.

Here's what else I'm thinking about:

  • Mind the gaps in sectoral privacy laws. Such was the theme and the takeaway of this week's hearing in the Subcommittee on Innovation, Data, and Commerce of the House Energy and Commerce Committee. This was the sixth privacy hearing of the year, designed to continue laying the foundation for the reintroduction of the American Data Privacy and Protection Act. Amelia Vance of the Public Interest Privacy Center and Morgan Reed of ACT (The App Association) were joined by representatives from Salesforce and REGO Payment Architectures. In their written and oral testimony, the witnesses helped the committee dive deep into the weeds, analyzing how a comprehensive consumer privacy law like ADPPA would fill in the holes in sectoral protections, including those covering kids, students, financial and health data. Adding to the conversation from the sidelines, EPIC posted a helpful analysis and infographic showing how existing protections in the U.S. are indeed "full of holes."
  • The White House convened a bipartisan group of state legislators to discuss strengthening protections against nonconsensually distributed intimate images, which are prohibited under law in 48 states. According to the press release, at the convening, legal experts underscored that the most effective laws define the offense without a motive requirement and apply to both threats and actual acts of nonconsensually distributed intimate images.
  • The governor of the state of Washington signed the new My Health, My Data Act into law. To help privacy professionals prepare for this broad consent-focused framework, IAPP’s Westin Research Center published a detailed legal analysis. If 3,000 words on MHMDA isn't enough and you want to dive even deeper into the implications of this watershed law, I highly recommend reading the analysis by FPF's Felicity Slater (look for the linked policy brief) and the ever-expanding multipart blog series from the team at Hintze Law, who most recently unpacked the implications of MHMDA for biometric data.
  • In California, the second tranche of public comments is now posted. Comments from across the policy world respond to the California Privacy Protection Agency’s ongoing rulemaking on cybersecurity audits, risk assessments and automated decision-making.

Upcoming happenings

  • 2 May at 12:00 EDT, IAPP hosts a LinkedIn Live on "the sweeping scope of Washington's My Health, My Data Act" (virtual).
  • 10 May at 13:00 EDT, FPF hosts a ceremony for its third annual Research Data Stewardship Award (virtual).

Please send feedback, updates and healthy skepticism to cobun@iapp.org.

