
United States Privacy Digest | A view from DC: Can the private sector catch up with government privacy assessments?


If you work as a privacy pro within a federal agency, then, like any good bureaucrat, you are familiar with a mystifying assortment of acronyms. When your agency is considering using a new technical system, you know you will need to complete a PTA to assess whether the system will collect PII, which will determine whether you must fill out a PIA or publish a SORN. Soon, you may also have to conduct an AIIA or publish an AIUCI.

Stick with me; I promise these acronyms matter.

The series of privacy steps that agencies undertake is not an empty exercise in paperwork. In fact, with some Orwellian irony, some of these requirements have their roots in a U.S. law called the Paperwork Reduction Act. More accurately, it was the E-Government Act of 2002 that brought the transparency and accountability requirements of government privacy programs to full maturity.

Before 2002, agencies were required to publish official system of records notices in the Federal Register, a requirement dating all the way back to the Privacy Act of 1974. These notices explain the legal authority a government entity has to collect, process and retain personal information. Their public nature makes them a key tool for transparency and accountability. A "system of records" is essentially an antiquated term for a database.

When compared with private-sector practices, SORNs look a lot like the foundations of a robust data mapping program — albeit one that must be made available for public scrutiny. For example, take a look at the U.S. Federal Trade Commission's detailed list of all systems of records that it maintains, which effectively draws a map of data systems held by the agency, or at least those that touch any personally identifiable information.
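To make the parallel concrete, here is a minimal sketch, in Python, of what one entry in a private-sector data map might capture if it mirrored the fields a SORN makes public. The class and field names are hypothetical, chosen for illustration; they are not the Privacy Act's categories or the FTC's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataMapEntry:
    """One system in a hypothetical data map, mirroring what a SORN discloses."""
    system_name: str                  # the "system of records"
    purpose: str                      # why the data is collected and processed
    legal_basis: str                  # statutory authority or business justification
    categories_of_individuals: list[str] = field(default_factory=list)
    categories_of_records: list[str] = field(default_factory=list)
    retention_period: str = "unspecified"

# Illustrative entry for a customer-support system that touches PII.
support_tickets = DataMapEntry(
    system_name="Customer support ticketing system",
    purpose="Resolve customer inquiries and complaints",
    legal_basis="Performance of contract / terms of service",
    categories_of_individuals=["customers", "prospective customers"],
    categories_of_records=["name", "email address", "ticket history"],
    retention_period="5 years after account closure",
)
```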

After 2002, agencies were also required to conduct privacy impact assessments, which should be a familiar term of art for privacy pros everywhere.

PIAs are a more flexible and detailed tool for fully documenting risks and safeguards related to any type of technical system that could process personal data, even if it is not part of a large, structured database. They complement SORNs by providing additional granularity about all sorts of data practices. Over the years, the Office of Management and Budget has refined the standards for conducting PIAs, most recently in its transformative 2016 revision to Circular A-130.

As an example of the ubiquity of these assessments, take a look at the Internal Revenue Service PIA about just one Facebook page that it maintains, through which the agency correctly predicts it might collect personal information via interactions with individuals' social media accounts.

As with any good governance process, there is always a step before documentation begins, when you determine which parts of the process will be required. That's where the privacy threshold analysis comes in. This page from a sub-agency of the U.S. Department of Agriculture explains how PTAs help the agency incorporate privacy principles into any new or modified program, providing a place to document whether a PIA or SORN — or both — will be required.
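As a rough illustration of how that threshold step gates the later documentation, here is a short Python sketch of the decision a PTA records. The criteria and names are simplified assumptions for illustration, not any agency's actual rules; the real Privacy Act trigger for a SORN, for instance, turns on whether records are retrieved by a personal identifier.

```python
def privacy_threshold_analysis(collects_pii: bool,
                               retrieved_by_identifier: bool) -> list[str]:
    """Return the follow-on documentation this hypothetical PTA would trigger."""
    required: list[str] = []
    if not collects_pii:
        return required                 # no PII involved: no PIA, no SORN
    required.append("PIA")              # processing PII triggers an impact assessment
    if retrieved_by_identifier:
        required.append("SORN")         # simplified stand-in for the Privacy Act trigger
    return required

# A new system that stores PII and retrieves records by name needs both documents.
print(privacy_threshold_analysis(collects_pii=True, retrieved_by_identifier=True))
# ['PIA', 'SORN']
```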

Helpfully, the agency lists every PIA it has conducted and every SORN it has published, all of which are made publicly available for inspection.

Private-sector entities are hopefully already conducting similar processes as part of a robust privacy governance program, even if they are not obligated to make such documentation publicly available.

Now, policymakers and bureaucrats alike are faced with a new question: How does the rise of advanced technical systems, like those powered by machine learning or artificial intelligence, complicate the existing processes for providing transparency and redress for their processing of personal data?

One consideration laid out in the recent Executive Order 14110 is whether AI governance processes should be appended to existing privacy processes within federal agencies. As it happens, this is the same question many private sector organizations find themselves wrestling with.

To help answer this question, OMB is seeking public comment on PIAs with a deadline of 1 April.

At the same time, the federal government — just like many private sector organizations — is realizing the need to create new governance processes for AI systems separate from privacy processes, to help incorporate other considerations and equities like bias and discrimination into data governance practices. This is where AI impact assessments come in.

The OMB's draft guidance on AI strengthens the governance processes for certain AI systems, which echo in many ways the privacy transparency requirements above. The OMB already requires publication of annual AI use case inventories by federal agencies. This year, agencies will need to "identify and report additional detail on how they are using safety-impacting and rights-impacting AI, the risks — including risks to equity — that such use poses (and) how they are managing those risks."

For those new categories of high-risk systems (safety-impacting and rights-impacting AI), OMB will also require agencies to conduct an AI impact assessment to document the system's purposes, risks, and safeguards. For now, this appears to be an internal documentation requirement, which each agency must be prepared to provide to OMB, though the draft guidance considers the possibility that agencies will include these assessments in their annual AI use case inventories.
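Here is a minimal sketch of how an agency, or a company borrowing the idea, might structure one record in such an inventory and flag when the high-risk categories trigger a fuller assessment. The field names are assumptions made for illustration, not OMB's reporting schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in a hypothetical AI use case inventory."""
    name: str
    purpose: str
    safety_impacting: bool = False
    rights_impacting: bool = False
    risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def needs_impact_assessment(self) -> bool:
        # The high-risk categories are what trigger the fuller AI impact assessment.
        return self.safety_impacting or self.rights_impacting

benefits_triage = AIUseCase(
    name="Benefits claim triage model",
    purpose="Prioritize incoming claims for human review",
    rights_impacting=True,
    risks=["disparate error rates across demographic groups"],
    mitigations=["pre-deployment bias testing", "human review of adverse decisions"],
)
print(benefits_triage.needs_impact_assessment())  # True
```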

There is much the private sector can learn from these AI governance innovations, but also a lot still to be decided. For example, in its request for information on PIAs, OMB asks for input on how AI risk assessments will relate to potentially enhanced rules for PIAs. Are these parallel but separate processes? Are they all part of the same governance conversation?

We are asking much the same question as we grapple with the role that privacy professionals play in the emerging profession of AI governance. There are as many commonalities between the domains as there are distinctions. But it is clear privacy pros and privacy processes will continue to be part of the solution to mitigating AI risks, even as we work to uncover the rest of the picture.

Here's what else I'm thinking about:

  • New Jersey is the lucky 13th state to enact a comprehensive consumer privacy law, and it comes with some new quirks. Unlike New Hampshire, soon to be number 14, which has very few distinguishing qualities when compared to other state laws, New Jersey presents some twists for privacy professionals. These include broad sensitive data definitions, possibly the most stringent set of opt-in requirements for teen data, a strong right to delete, and rulemaking authority for the attorney general. The Future of Privacy Forum's Jordan Francis provided a helpful analysis of the new law's distinguishing characteristics.
  • Meanwhile, CDT has published an important critique of U.S. states' failure to create true data minimization rules. Eric Null, co-director of CDT's Privacy and Data Project, writes that state consumer privacy laws are merely perpetuating a notice and choice regime and "letting us down on data minimization. The concept has been co-opted at the state level to mean something more like companies cannot collect data for any purposes for which they do not inform the consumer." After analyzing existing state codes, Null calls for states to reexamine their approach, either by prioritizing a different style of new law, like those pending in Massachusetts and Maine, or by fixing existing codes with stronger baseline restrictions on data collection and use.
  • Not to be outdone, the Electronic Privacy Information Center today released a report that analyzes each of the 14 state comprehensive consumer privacy laws in play. The group built a rubric to assign a letter grade to each law based on whether it includes the provisions that would "adequately protect consumers online." No state received an "A."
  • As the FTC has said before, failure to embrace data minimization is a security threat as well as a privacy threat. In a new order against Blackbaud, the agency is reiterating this lesson. The company's alleged failings are so numerous that the FTC describes its data security practices as "shoddy," yet the agency goes out of its way to mention data retention practices among the company's shortcomings — and the company must now delete reams of sensitive data it improperly held. This could be a preview of how data minimization will feature in the next iteration of the FTC's rulemaking, expected this quarter.

Upcoming happenings:

  • 7 Feb., 13:00 ET: the Information Technology Industry Council hosts The Intersect: A Tech + Policy Summit (Capital Turnaround).
  • 15 Feb., 11:00 ET: IAPP hosts a LinkedIn Live conversation, Pay or OK: Practical Considerations for AdTech and Beyond.

Please send feedback, updates and governance workflows to cobun@iapp.org.



1 Comment


  • Annie Bai • Feb 9, 2024
    Excellent article, thank you for compiling all this info. I agree that government PIAs are a robust process and as a service provider, it was a smooth process to participate in a PIA on my company. Hopefully the AI risk process will be similarly efficient and transparent.