
AI vs. privacy: How to reconcile the need for sensitive data with the principle of minimization

It can feel like something of a catch-22. In the interest of good privacy practices, companies limit or avoid the collection of sensitive data, such as race or ethnicity, but then realize that without it, they are less able to engage in adequate bias testing. It is not unusual for us to begin artificial intelligence audits for corporate clients whose data scientists and lawyers are at a standstill over how to adequately test their AI models.

For that reason, there appears to be an inherent conflict. Training and then testing AI systems for bias requires the availability of sensitive data such as race, gender, age and other demographic categories, yet there is often no other business need for such data. Companies complying with the standard privacy principle of collection limitation, also known as "data minimization," largely prefer not to collect or retain this type of sensitive data, and in some cases they may be legally prohibited from doing so by federal and state restrictions that vary by context.

Awareness of the need for adequate bias testing data has been growing. For example, U.S. Federal Trade Commissioner Rebecca Kelly Slaughter noted in a recent paper that "A recent study found racial bias in a widely used machine-learning algorithm intended to improve access to care for high-risk patients with chronic health problems … The researchers who uncovered the flaw in the algorithm were able to do so because they looked beyond the algorithm itself to the outcomes it produced and because they had access to enough data to conduct a meaningful inquiry." (Full disclosure: One of us served as editor for this paper.) As audits become both a best practice and a legal requirement, companies will face growing pressure to collect this type of data in order to evaluate their AI systems adequately.

Unfortunately, advancing privacy legislation sometimes works at cross-purposes to this goal. The California Privacy Rights Act is one of the most stringent privacy laws in terms of data minimization requirements — emphasizing the need to limit unnecessary data collection and restricting data processing to a short list of accepted purposes. It also requires businesses to delete sensitive consumer data once it's no longer in use. How, then, can we reconcile the legitimate needs around governance for bias in AI systems with important privacy protections that focus on limiting data? 

There is more than one Fair Information Practice Principle

A critical first step is to acknowledge that data minimization is neither the only privacy principle nor automatically the most important one. Fairness, transparency and accountability are likewise critical values that apply to data practices generally and are highlighted in particular for AI and machine learning-based systems. None of these principles can be implemented in practice without sufficient data. While minimization is indeed a valuable protection, it cannot be prioritized at the cost of ignoring other potential harms. Some kind of balance must be found.

We have had clients who carried the data minimization standard to such an extreme that they lacked sufficient data to evaluate whether their model even performed its function accurately or reliably, much less whether it instilled or perpetuated bias. Fairness through "unawareness" doesn't work. Instead, we must accept that, in the language of the CPRA, what is "reasonably necessary and proportionate to achieve the purposes for which the personal information was collected or processed" must eventually be understood to include ensuring that the purpose is performed fairly and monitored responsibly.

This isn't a cop-out. As the U.S. National Institute of Standards and Technology recognized in its push for standards for managing bias in AI, technology is both created within and impacts our society, and as such, "the importance of transparency, datasets, and test, evaluation, validation, and verification cannot be overstated." Last year's draft of the Algorithmic Accountability Act took a similar approach, requiring audits or other assessments for fairness and bias and directing the FTC to create rules for impact assessments. Article 70 of the EU AI Act calls for "facilitating audits of the AI systems with new requirements for documentation, traceability and transparency" and recognizes the need for collection and confidentiality of the data required for such audits.

Even the EU General Data Protection Regulation acknowledges that one possible exception to the restrictions on processing sensitive data is to "safeguard the fundamental rights and the interests of the data subject," while still requiring controllers to provide sufficient access and protections when they take steps to prevent errors, bias, and discrimination. 
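Concretely, the outcome-level testing these frameworks contemplate can be as simple as comparing favorable-outcome rates across demographic groups, which is impossible without a retained demographic field. Below is a minimal sketch of such a check; the column names, toy data and the familiar four-fifths threshold are illustrative assumptions, not requirements drawn from the laws and frameworks cited above.

```python
# A minimal sketch of an outcome-based bias check, not the authors' methodology.
# It only works if a demographic field has been retained alongside model decisions;
# the column names, toy data and the 0.8 "four-fifths" threshold are illustrative.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of favorable outcomes (outcome == 1) within each demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Each group's selection rate divided by the most favored group's rate."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates / rates.max()

# Toy data: model decisions joined with a separately stored demographic field.
decisions = pd.DataFrame({
    "approved":       [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "race_ethnicity": ["A", "A", "A", "B", "B", "B", "A", "A", "B", "B"],
})

ratios = adverse_impact_ratios(decisions, "race_ethnicity", "approved")
print(ratios)
print("Groups below the illustrative 0.8 threshold:", list(ratios[ratios < 0.8].index))
```

If any group's ratio falls well below its peers, the outcome is flagged for deeper review. None of this is possible if the demographic field was never collected or has already been deleted.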

What can companies do? 

There are a number of strategies companies may adopt to collect, generate or approximate sensitive data that can be used for evaluation purposes. 

  1. Collect data directly. In many instances, it may make the most sense to intentionally include the collection, handling and protection of sensitive data starting in the design phase. This doesn't mean such data has to, or should be, included in the training data for the model. But having it available will make subsequent testing and oversight tremendously more feasible and accurate. Even current restrictive guidance has allowances for this in some areas. As suggested in Slaughter's paper, creditors would be well-served to make use of the ECOA exception that permits the collection of demographic information to test their algorithmic outcomes. She notes that few creditors take advantage of this exception and speculates they fear that collection of the data will inflate claims that their decisions are biased. But as she points out, the collection of demographic data for the purpose of self-testing is not a sign of bias, as long as it is clear that the data is actually and only being used for that purpose. "Enforcers should see self-testing (and responsive changes to the results of those tests) as a strong sign of good-faith efforts at legal compliance and a lack of self-testing as indifference." 
  2. Generate intentional proxies. Models are known to "learn" bias aligned to race even in the absence of such data because of strong correlations with other existing data. Rather than letting this happen on its own, operators can intentionally and efficiently infer demographic data from the less-sensitive information they have on file. The most prominent method for this type of inference is known as Bayesian Improved Surname Geocoding, which has a long history in regulated areas such as consumer finance. BISG uses surnames and ZIP codes to infer race and ethnicity (see the sketch after this list). The Consumer Financial Protection Bureau has endorsed this approach, and endorsement from a major regulator helps establish legal defensibility should external scrutiny arise. The CFPB has even made its own code available on GitHub so others can use its methods. There are other variations companies may explore as well, including a method that incorporates first names.
  3. Buy it. Another way to address missing demographic data includes looking to data brokers, public data or other data sets to which a company may have access in order to fill this gap. While this is a straightforward way to generate missing information, it obviously raises parallel concerns to ensure that the source, sharing and purpose limitation parameters align with applicable privacy policies. 
  4. Ask. Consent is a viable basis for collection and use of this data in many instances. And depending on the size and scope of the dataset, even having partial data fields for these sensitive categories may be sufficient for representative testing. In some cases, our clients have reached out to select sets of customers or users, explained why they need this sensitive information, and simply asked for it directly. 
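To make the proxy approach in point two concrete, here is a heavily simplified sketch. The CFPB's published methodology relies on Census surname tables and block-group demographics; the probability tables, names, areas and base rates below are toy, hypothetical values used only to show how the surname and geography signals are combined.

```python
# Heavily simplified BISG-style sketch. The CFPB's published code uses Census
# surname tables and block-group demographics; every table, name and area
# below is a toy, hypothetical value used only to illustrate the mechanics.
import pandas as pd

# P(race/ethnicity | surname), e.g., from a surname frequency table (toy values).
P_RACE_GIVEN_SURNAME = pd.DataFrame(
    {"white": [0.05, 0.90], "black": [0.05, 0.05], "hispanic": [0.90, 0.05]},
    index=["GARCIA", "SMITH"],
)

# P(race/ethnicity | geography), i.e., demographic shares of each area (toy values).
P_RACE_GIVEN_GEO = pd.DataFrame(
    {"white": [0.30, 0.80], "black": [0.20, 0.10], "hispanic": [0.50, 0.10]},
    index=["AREA_1", "AREA_2"],
)

# National base rates P(race/ethnicity) (toy values).
P_RACE = pd.Series({"white": 0.60, "black": 0.13, "hispanic": 0.19})

def bisg_posterior(surname: str, area: str) -> pd.Series:
    """Combine the two signals with Bayes' rule, assuming surname and geography
    are conditionally independent given race/ethnicity:
        P(r | surname, area)  is proportional to  P(r | surname) * P(r | area) / P(r)
    """
    unnormalized = (
        P_RACE_GIVEN_SURNAME.loc[surname] * P_RACE_GIVEN_GEO.loc[area] / P_RACE
    )
    return unnormalized / unnormalized.sum()

# Proxy probabilities for one record; useful for aggregate testing, not individual labels.
print(bisg_posterior("GARCIA", "AREA_1"))
```

The resulting probabilities are proxies suitable for aggregate bias testing, not determinations about any individual, which is consistent with how the method has been used in fair lending analysis.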

In any discussion of privacy rights, it is also important to acknowledge the challenges posed by the various policy and regulatory requirements providing a "right to deletion." This is particularly thorny in the context of AI systems, where validly held data was used to train a system and is subsequently deleted at a data subject's or consumer's request. There is not yet a clear consensus on whether systems trained on such information should be affected once continuing access to the individual records is gone. We will simply note here that however that question is resolved, the answer will certainly encompass sensitive data, but it is a larger question for AI broadly.

Generative AI  

We take a final moment to note that while these recommendations and lessons have been discussed primarily in the context of traditional machine learning systems, they also apply to designing and performing evaluations of data collection and use for generative AI systems. It may take some creative thinking by lawyers and technologists working together to apply established standards to generative AI, but just as such systems can be audited, so too can measures and standards around fairness and bias be required of them.

