
Imagine you are baking a cake for an upcoming holiday party. The recipe calls for butter, and luckily you have just enough. The next morning, you go to break your overnight fast with buttered toast. Alas, you remember there is no butter left. You stare at the cake, wondering if it's possible to somehow extract some or all of the butter you used last night to slather onto your toast. You quickly realize how futile the task would be. The butter has melted and seeped into the whole cake, morphing into a product that is different from just the sum of its parts. Perhaps, instead, it'll be cake for breakfast.

What if artificial intelligence models and other algorithms were trained on personal information? Once a model has been trained, it cannot simply be wound back to its prior training data. Consequently, it can be incredibly difficult to remove the distinct effects of that data from the trained model. The model is, for all intents and purposes, baked.

The potential harms of AI models, however, could be far worse than dry toast or, indeed, cake for breakfast: discrimination, bodily injury and privacy violations are all possibilities. Furthermore, U.S. agencies have begun to use model disgorgement as a remedial mechanism for privacy violations. The explosion of AI technology has spurred new development of model disgorgement techniques. With this in mind, it is important for both privacy and AI governance professionals to examine and understand the current state of model disgorgement.

Understanding model disgorgement and destruction

Model disgorgement — also known as algorithmic disgorgement or algorithmic destruction — refers to the deletion or destruction of models and algorithms. It is usually performed to remove "bad" data from a model, where bad data is any data that was illegally obtained, used without consent, found to be invalid or that otherwise affects the validity or legality of the model. While the terms destruction and disgorgement are often used interchangeably, it is important to distinguish them:

  • Destruction: The entire model is destroyed or otherwise unusable, and a new model must be made entirely from scratch.
  • Disgorgement: The offending training data and its effects are removed from the existing model, making it as if that data had never been used in the first place.

Since the process of model disgorgement can be difficult, or even impossible, depending on how the model was built, an order for disgorgement can be a de facto order of model destruction. This is a costly burden considering the amount of resources needed to fully train a model. The surge of generative AI models that collect training data from all corners of the internet has further underscored the need for an approach that is less blunt than total destruction.

Model disgorgement techniques

One of the leading ways to view emerging disgorgement techniques is through a taxonomy developed by Amazon Web Services AI researchers. The taxonomy is split along two axes: one is based on when the technique can be utilized in the model training process (reactive, proactive or preemptive) and the other on the "provability" of the removal of the effects of the training data (deterministic or probabilistic).

Retraining (reactive, deterministic)

Retraining is the process of completely retraining a model from scratch after removing the offending data. As mentioned previously, this is becoming increasingly unviable as neural networks grow in size and, consequently, in the resources required to train them. The resulting model can also be incompatible with the implementation of the old model, as each retraining run can produce different behavior.
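As a rough illustration, retraining amounts to filtering the offending records out of the training set and fitting a brand-new model on what remains. The sketch below assumes a hypothetical list of (record_id, features, label) tuples and a set of offending record IDs; the scikit-learn classifier is only a stand-in for whatever model is actually in use.

```python
# Minimal sketch of disgorgement by full retraining, under the assumptions above.
from sklearn.linear_model import LogisticRegression

def retrain_without(records, offending_ids):
    # Drop every record whose ID appears in the offending set.
    clean = [(x, y) for rec_id, x, y in records if rec_id not in offending_ids]
    X = [x for x, _ in clean]
    y = [y for _, y in clean]
    # Train an entirely new model from scratch on the remaining data; the old
    # model is discarded, which is why this approach is so costly at scale.
    return LogisticRegression(max_iter=1000).fit(X, y)
```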

Selective forgetting/unlearning (reactive, probabilistic)

Instead of removing the offending data and retraining the model, machine unlearning removes the effects of that offending data from the existing model. The technique relies on a statistically provable guarantee: the effect of the training data is expressed as a quantifiable measure, which is driven to nearly zero once the unlearning process completes. This is a novel area of research, though there are signs of progress on efficiency and on the development of different deletion algorithms.
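For intuition, the sketch below shows an exact, simplified case: a ridge-regression model whose training can be summarized by running statistics, so a single record's contribution can be subtracted out directly. Deep neural networks do not admit such clean bookkeeping, which is why practical unlearning research focuses on approximate, statistically bounded guarantees; the class and method names here are purely illustrative.

```python
# Toy sketch of exact forgetting for a linear (ridge) model; illustrative only.
import numpy as np

class UnlearnableRidge:
    def __init__(self, n_features, lam=1.0):
        self.A = lam * np.eye(n_features)  # accumulates X^T X plus regularization
        self.b = np.zeros(n_features)      # accumulates X^T y

    def learn(self, x, y):
        self.A += np.outer(x, x)
        self.b += y * x

    def forget(self, x, y):
        # Subtract the record's contribution, as if it had never been used.
        self.A -= np.outer(x, x)
        self.b -= y * x

    def weights(self):
        return np.linalg.solve(self.A, self.b)
```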

Compartmentalization (proactive, deterministic)

Rather than training a model with an entire dataset, compartmentalization uses subsets of the dataset to train smaller versions of the model, which can eventually be combined to create an outcome similar to that of one large model. In this case, only the small models in which offending data was used need to be deleted and retrained.
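A well-known instance of this idea is SISA-style sharded training. The sketch below, which assumes a NumPy feature matrix with binary 0/1 labels and an arbitrary shard count, trains one small classifier per shard, aggregates their votes at prediction time and retrains only the shards that contained offending rows.

```python
# Minimal sketch of compartmentalized (sharded) training and targeted retraining.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_shards(X, y, n_shards=5):
    # Split the row indices into disjoint shards and train one sub-model each.
    shards = list(np.array_split(np.arange(len(X)), n_shards))
    models = [LogisticRegression(max_iter=1000).fit(X[idx], y[idx]) for idx in shards]
    return shards, models

def predict(models, X):
    # Aggregate sub-model votes by simple majority (assumes binary 0/1 labels).
    votes = np.array([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def disgorge(shards, models, X, y, offending_rows):
    # Retrain only the shards that contain offending rows, after dropping them.
    for i, idx in enumerate(shards):
        if np.intersect1d(idx, offending_rows).size:
            keep = np.setdiff1d(idx, offending_rows)
            shards[i] = keep
            models[i] = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
    return shards, models
```

The design trade-off is that most of the training investment survives a disgorgement order: only the affected sub-models are rebuilt, at the cost of somewhat lower accuracy than a single model trained on the full dataset.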

Dataset emulation (preemptive, deterministic)

Dataset emulation is the process of training on synthetic data instead of a real dataset. Synthetic data is typically generated in the style of a certain dataset but lacks identifiers or other key markers that may cause privacy violations or other unwanted outcomes. This is a particularly active development space as the use of synthetic data becomes more and more desirable.
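As a simplified illustration, the sketch below fits a per-class Gaussian to the real data and then trains the downstream model only on samples drawn from that generator, so no real record is fed to the model directly. Production systems would rely on far stronger synthetic data generators and would still need to assess leakage from the generator itself; all names and parameters here are assumptions for illustration.

```python
# Minimal sketch of dataset emulation with a naive per-class Gaussian generator.
import numpy as np
from sklearn.linear_model import LogisticRegression

def emulate_and_train(X, y, n_synthetic=1000, seed=0):
    rng = np.random.default_rng(seed)
    synth_X, synth_y = [], []
    for label in np.unique(y):
        cls = X[y == label]
        mean, cov = cls.mean(axis=0), np.cov(cls, rowvar=False)
        # Sample synthetic records "in the style of" this class.
        synth_X.append(rng.multivariate_normal(mean, cov, size=n_synthetic))
        synth_y.append(np.full(n_synthetic, label))
    # The downstream model only ever sees synthetic data.
    return LogisticRegression(max_iter=1000).fit(np.vstack(synth_X), np.concatenate(synth_y))
```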

Differential privacy (preemptive, probabilistic)

Differential privacy mathematically guarantees an algorithm's output will be virtually the same regardless of whether any one individual's data is added to or removed from the training data. In other words, it becomes effectively impossible to determine from the model's output whether a data subject's data was used to train it, thus protecting their privacy. This is also still a burgeoning area of research.
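Formally, that guarantee is usually stated as (ε, δ)-differential privacy: for any two datasets D and D' that differ in a single individual's record, and for any set of possible outputs S, a randomized mechanism M satisfies

\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta

Smaller values of ε and δ mean the two output distributions are closer together, so an observer learns correspondingly less about whether any particular person's data was included in training.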

Many of these concepts, as well as other important AI terms, are defined in the IAPP Key Terms for AI Governance resource.

Note that the above list from AWS AI researchers is not exhaustive, but it is one — albeit very illustrative — way to view disgorgement techniques even as they rapidly develop. These techniques are not necessarily mutually exclusive either. For example, in 2020 researchers introduced a framework that incorporates properties from both unlearning and compartmentalization techniques. Alternatively, one could use differential privacy techniques to preserve privacy while simultaneously using synthetic data to check for biases and other unintended effects in a model. The industry seems eager for further development in these areas, as evidenced by events such as Google's first-ever Machine Unlearning Challenge.

Many privacy professionals will recognize that several of the above techniques involve good data privacy practices; both differential privacy and synthetic data are privacy-enhancing technologies covered by the IAPP. Even at a foundational level, when actual model disgorgement may seem out of reach, established data privacy practices can put you on a path forward should the need ever arise. Data provenance is crucial. After all, if you do not have a documented trail of where training data was sourced from, where it's stored and what it's being used for, you have essentially tripped at the starting line. Paired with an effective data governance program, these practices provide structures on which responsible AI modelers can rely.
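As a concrete, hypothetical illustration of what that documented trail might capture, a per-dataset provenance record could be as simple as the structure sketched below; the field names are invented for illustration and are not drawn from any particular standard.

```python
# Illustrative sketch of a per-dataset provenance record; all fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    dataset_id: str
    source: str             # where the data was obtained (vendor, scrape, first party)
    legal_basis: str        # e.g., consent, contract, legitimate interest
    storage_location: str   # system or bucket where the data lives
    used_in_models: list = field(default_factory=list)  # models trained on this data

record = ProvenanceRecord(
    dataset_id="customer-images-2023",
    source="first-party upload",
    legal_basis="consent",
    storage_location="s3://training-data/customer-images-2023",
    used_in_models=["image-grouping-v2"],
)
```

With records like this in place, identifying which models are touched by a deletion request or a disgorgement order becomes a lookup rather than an investigation.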

Model disgorgement as a legal remedy

If model disgorgement is such a novel concept and is so difficult to execute, why bother thinking about it? At least in the U.S., model disgorgement has become a cudgel of the U.S. Federal Trade Commission under its Section 5 authority to enforce the prohibition on "unfair or deceptive acts or practices." As Jevan Hutson and Ben Winters describe in their legal analysis, the FTC has ordered model disgorgement in five separate instances:

  1. In the Matter of Everalbum
  2. In the Matter of Cambridge Analytica
  3. USA v. Kurbo Inc. and WW International
  4. FTC v. Ring
  5. USA v. Edmodo

In each case, user data was grossly misused or inappropriately collected in part to train algorithms and models in furtherance of product goals. Models are a large investment for organizations, so disgorgement translates into serious financial penalties and material losses.

It's important to note that for the U.S. federal government, disgorgement as a general remedy is not new. The Securities and Exchange Commission has a history of reclaiming profits made as part of financial crimes. The FTC simply expanded this view to encompass consumer data. As former FTC Commissioner Rohit Chopra said in a statement, Everalbum was forced to "forfeit the fruits of its deception." Thus, organizations operating within the U.S. should continue to ensure proper consumer data use, especially when it is used to train algorithms or models. Outside the U.S., organizations should be prepared for local legislation that includes model disgorgement as a remedy, especially as AI adoption and integration continues to expand.

Model disgorgement as a right

When used as a tool by the FTC, model disgorgement should not be viewed purely as a form of punishment. One could also argue, as an extension of data subject rights laws, that users should have the right to remove their data and its effects from an AI model, and that disgorgement is a corrective remedy rather than, or in addition to, a punitive order. Some research ties this directly to the right to be forgotten embedded in many privacy laws, such as the U.K. and EU General Data Protection Regulations. Ideally, novel model disgorgement techniques will give organizations a way to service data subject requests without being entirely destructive to the model as a whole.

Looking forward

Model disgorgement sits at the confluence of emerging and escalating privacy and AI governance challenges. Since AI models will only continue to proliferate and become more complicated and integrated, it's imperative that organizations understand how model disgorgement may affect them. Organizations that have established privacy programs with good data governance will be well positioned to implement novel model disgorgement techniques as they become available. Additionally, regulators will almost certainly look more favorably upon organizations with these structures. Chances are organizations with effective privacy programs are less likely to grossly misuse consumer data and attract the ire of the FTC.

As disgorgement techniques develop, organizations may find them increasingly useful to implement. AI legislation and guidance from regulators may offer insights or instructions on utilizing model disgorgement. Thus, both privacy and AI governance professionals should be mindful of this nascent, yet necessary, concept.
