ANALYSIS

Privacy engineering for AI machine learning: Addressing algorithmic disgorgement risks during product development


Contributors:

Lisa Nee

CIPP/E, CIPP/US, CIPM, CIPT, FIP

Senior Counsel

Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

In today's regulatory environment, the legal risk of algorithmic disgorgement — the deletion of algorithms developed using illegally collected data — makes privacy consultations a necessary and strategic requirement of the artificial intelligence development life cycle.

During privacy consultations for machine learning AI, engineers are already considering a body of technical methods that could address the risk of algorithmic disgorgement: machine unlearning.

Unlearning for machine learning

AI engineers commonly implement "unlearning" techniques for non-privacy purposes, including resuming training from where it left off following a crash or timeout, analyzing a model's performance at different stages or fine-tuning pre-trained AI models at intermediate stages.
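To make the resume-after-a-crash use concrete, the following is a minimal illustrative sketch, not a production implementation: a toy training loop that writes its state to disk each epoch and picks up where it left off if restarted. All names (`train`, the `epoch`/`weight` fields) are invented for illustration.

```python
# Illustrative sketch only: a toy training loop that checkpoints its state
# so it can resume after a crash or timeout. Real frameworks use the same
# pattern with far more machinery.
import json
import os

def train(total_epochs, ckpt_path):
    # Resume from the checkpoint if one exists; otherwise start fresh.
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            state = json.load(f)
    else:
        state = {"epoch": 0, "weight": 0.0}

    for epoch in range(state["epoch"], total_epochs):
        state["weight"] += 0.1      # stand-in for a real gradient update
        state["epoch"] = epoch + 1
        with open(ckpt_path, "w") as f:
            json.dump(state, f)     # save progress at the end of each epoch
    return state
```

Calling `train(3, path)` and then `train(5, path)` simulates an interrupted run: the second call reloads the saved state and trains only the remaining two epochs rather than starting over.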

AI checkpoints. These saved snapshots of a machine learning model's state during training act like a pause button, allowing developers to save the model's then-current progress. Because AI checkpoints enable selection of the best outcome model prior to a full-blown deployment, they are commonly used for inference — the process by which a machine learning model makes predictions or generates outputs from new data.

AI checkpoints are typically saved at regular intervals during training or when certain performance milestones are achieved. Full AI checkpoints save the model's entire state at a point in time, including architecture and data. Partial AI checkpoints save only select elements, such as a machine learning model's weights and parameters — that is, the numerical values that determine the strength of connections between nodes in the neural network, and the learnable elements of the model, respectively.
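The full-versus-partial distinction can be sketched with a toy model represented as a plain dictionary. This is an illustrative assumption, not any particular framework's API, though real libraries follow the same idea (for example, saving a model's parameter dictionary rather than the whole model object).

```python
# Illustrative sketch only: contrasting a full checkpoint (entire model
# state, architecture included) with a partial checkpoint (weights and
# parameters only). The model structure here is invented for illustration.
import pickle

model = {
    "architecture": {"layers": [4, 8, 2], "activation": "relu"},
    "weights": [0.12, -0.34, 0.56],  # learnable numerical parameters
}

# Full checkpoint: serializes the entire state, so it is self-describing.
full_ckpt = pickle.dumps(model)

# Partial checkpoint: serializes only the weights; restoring it requires
# rebuilding the architecture separately in code.
partial_ckpt = pickle.dumps(model["weights"])

restored_full = pickle.loads(full_ckpt)        # complete model state
restored_weights = pickle.loads(partial_ckpt)  # weights only
```

The trade-off mirrors the article's description: a partial checkpoint is smaller and faster to save, but only a full checkpoint captures everything needed to reconstruct the model on its own.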

