There is a looming tension between achieving fairness and preserving privacy in the operation of advanced technical systems.

Fair data outcomes require an understanding of demographic disparities, but gaining that understanding often means collecting or inferring personal data in ways that can infringe on individual privacy. Balancing these equities, while navigating regulatory and ethical obstacles, has become a tall order for digital governance teams.

This tension is not merely a technical hurdle but a reflection of broader societal values — where the right to privacy and the pursuit of equity must coexist. It may be one of the most difficult challenges privacy professionals face: combating inequities in artificial intelligence and machine learning systems while upholding our core privacy principles.

Asked to navigate this, we may even feel like Odysseus, caught between two grim hazards but knowing the only way forward is through. Do we collect and manage demographic attributes to test for bias, thereby increasing privacy risk? Or do we maximize privacy by forgoing fairness testing, exposing our whole endeavor to catastrophe?

"The Odyssey, Book XII" puts it into perspective:

"Now wailing in fear, we rowed on up those straits, Scylla to starboard, dreaded Charybdis off to port, her horrible whirlpool gulping the sea-surge down, down, but when she spewed it up — like a cauldron over a raging fire — all her churning depths would seethe and heave — exploding spray showering down to splatter the peaks of both crags at once!"

This tension is far from theoretical, and the choice may soon be made for us. In Colorado, for example, a bill awaiting the governor’s signature would create duties for developers and deployers to avoid algorithmic discrimination when AI systems are a "substantial factor" in decisions about education, employment, finance, health care, housing, insurance, legal services or essential government services. Mitigation measures include testing for fairness and bias at all stages of development and deployment.

At the same time, privacy legislation continues to expand protections for sensitive categories of data, which at times include demographic characteristics. While many legislative frameworks include exceptions for bias testing, not all have baked this in.

The path between these obstacles is narrow but passable. Lacking any immortal and far-seeing goddess to guide our path, we should instead start on our journey by listening to those who have navigated these shoals before.

In a new report, "Navigating Demographic Measurement for Fairness and Equity," the Center for Democracy and Technology's AI Governance Lab offers a timely exploration of this intricate balance. Through the report, Miranda Bogen and her colleagues build on their real-world experience to provide a clear-headed deep dive into best practices for achieving robust fairness testing across a wide variety of technical conditions.

Although it also makes policy recommendations, the report is written primarily as a roadmap for practitioners, outlining methodologies for properly measuring and handling demographic characteristics through data and infrastructure controls, privacy-enhancing methods and procedural controls. The report builds on many years of equally helpful, but much less accessible, prior research, which has examined best practices in the U.S. government context and in data-driven decision support, for example.

The meaty sections of the CDT report provide a survey of best practices for demographic testing, first focused on measurement and then on handling demographic data responsibly.

Measurement is focused on the data gathering stage: the approaches by which organizations may "obtain, observe, access, impute, or otherwise understand demographic characteristics, approximations, or patterns." The report reviews five "prominent approaches to revealing disparities related to people," each of which ultimately feeds the kind of disparity check sketched after the list below. The approaches include:

  1. Direct collection from data subjects, which must take into account privacy, agency and data quality concerns.
  2. Observation and inference.
  3. Proxies and surrogate characteristics.
  4. Auxiliary datasets, which can be combined or compared with existing data to enable bias measurement.
  5. Cohort discovery.
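
Whichever of these approaches an organization uses, the end goal is the same: comparing outcomes across groups. The snippet below is a purely illustrative sketch of my own, not from the report, of such a comparison, computing each group's rate of favorable outcomes relative to the best-off group. The column names, toy data and 80% threshold are assumptions, and real bias testing would involve far more careful metric selection and statistical rigor.

```python
# Hypothetical illustration only: a bare-bones disparity check of the kind these
# measurement approaches ultimately feed. Column names, data and the 80% parity
# threshold are my own assumptions, not taken from the CDT report.
import pandas as pd

def selection_rate_ratios(df: pd.DataFrame,
                          group_col: str = "demographic_group",
                          outcome_col: str = "favorable_outcome") -> pd.Series:
    """Each group's rate of favorable outcomes, divided by the best-off group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()  # per-group selection rate
    return rates / rates.max()                         # 1.0 = best-treated group

if __name__ == "__main__":
    toy = pd.DataFrame({
        "demographic_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "favorable_outcome": [1, 1, 1, 0, 1, 0, 0, 0],
    })
    ratios = selection_rate_ratios(toy)
    print(ratios)
    print("Groups below an illustrative 80% threshold:",
          list(ratios[ratios < 0.8].index))
```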

The report provides a similar level of analysis in reviewing approaches for measuring how identity-related characteristics are represented in datasets or generative system outputs. And, in case that is not enough, CDT also provides further detail on measuring disparities across contexts, using tools like synthetic data, exploratory analysis and qualitative research methods.

The final recommendations for handling demographic data should largely be familiar to privacy pros. CDT's report reviews the methodological, technical and organizational guardrails practitioners should consider when adding bias measurement to the purposes for which they process data. The recommended toolkit includes:

  1. Pseudonymization.
  2. Infrastructure controls, such as role-based or purpose-based access and use controls — which the report quotes one company as labeling "trust boundaries" — or making use of third-party data intermediaries, or even federated architectures.
  3. Encryption.
  4. Retention and ephemerality, including hard-coded retention limits, user control over previously collected demographic data, or ephemeral methods "where group estimations are made on the fly and immediately aggregated into summary statistics."
  5. Privacy-enhancing techniques, including aggregation, differential privacy and secure multiparty computation; a brief sketch of the differential privacy idea appears after this list.
  6. User controls, like the ability to opt out of demographic studies, though the report cautions that consent-based frameworks may be in tension with the need for bias mitigation. "The recently passed EU AI Act similarly acknowledged this limitation, clarifying that processing of special categories of personal data that is strictly necessary to ensure bias detection and correction in relation to high-risk AI systems is lawful under the legal basis of 'substantial public interest,' not just consent."
  7. Organizational oversight, such as cross-functional committees, multistakeholder engagement or board-level efforts. I would add to this list the role of independent accountability mechanisms, which could conceivably be designed to deliver uniform trusted standards in bias testing — perhaps in combination with a role as a data intermediary.
  8. Privacy impact assessments, also recommended as a helpful tool to evaluate whether methods for measuring and handling demographic data sufficiently mitigate risks.
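
To make item five slightly more concrete, the sketch below shows the basic differential privacy idea in a hypothetical form: per-group counts are released only after Laplace noise is added, so an audit sees noisy aggregates rather than raw demographic records. The epsilon value, group labels and function name are my own illustrative assumptions rather than the report's recommendations, and production work would rely on a vetted library instead of hand-rolled noise.

```python
# Hypothetical sketch of differentially private aggregation via the Laplace mechanism.
# Epsilon, the group labels and the function name are illustrative assumptions.
from collections import Counter
import numpy as np

def dp_group_counts(group_labels: list[str], epsilon: float = 1.0) -> dict[str, float]:
    """Release per-group counts with Laplace noise of scale 1/epsilon.

    Sensitivity is 1: adding or removing one person changes each count by at most 1.
    Smaller epsilon means more noise and stronger privacy.
    """
    rng = np.random.default_rng()
    counts = Counter(group_labels)
    scale = 1.0 / epsilon
    return {group: count + rng.laplace(0.0, scale) for group, count in counts.items()}

if __name__ == "__main__":
    observed = ["A"] * 120 + ["B"] * 40 + ["C"] * 15   # toy demographic labels
    print(dp_group_counts(observed, epsilon=0.5))      # noisy counts, safer to share
```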

Beyond offering a useful roadmap for AI governance professionals, the report is potentially helpful for policymakers considering how best to balance the many equities involved in large datasets and algorithmic processing. Among CDT's direct recommendations is a reminder that "agencies and regulators should expect organizations to make reasonable efforts to conduct algorithmic impact assessments and engage in non-discrimination efforts, particularly in consequential contexts."

It is clear there is much still to learn about best practices in this space. Hopefully, organizations will continue to find new ways to navigate these emerging hazards in ways that preserve autonomy, privacy and fair outcomes.

Along the way, as the report reiterates, transparency about these efforts is vital for building trust with regulators and consumers. Clear communication about methodologies and mitigation strategies also helps the emerging AI governance profession create uniform benchmarks. We may be navigating treacherous waters, but we should not be expected to go it alone.

Upcoming happenings:

  • 21 May, 17:00 ET: The IAPP's D.C. KnowledgeNet hosts a panel titled "U.S. State Privacy Laws: Compliance in an Evolving Landscape."

Please send feedback, updates and impossible choices to cobun@iapp.org.