Privacy engineering for AI machine learning: Addressing algorithmic disgorgement risks during product development


Contributors:
Lisa Nee
CIPP/E, CIPP/US, CIPM, CIPT, FIP
Senior Counsel
Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
In today's regulatory environment, the legal risk of algorithmic disgorgement — the deletion of algorithms developed using illegally collected data — makes privacy consultations a necessary and strategic requirement of the artificial intelligence development life cycle.
During privacy consultations for machine learning AI, engineers are already considering a body of technical methods that could address the risk of algorithmic disgorgement: machine unlearning.
Unlearning for machine learning
AI engineers commonly implement "unlearning" techniques for non-privacy purposes, including resuming training from where it left off following a crash or timeout, analyzing a model's performance at different stages or fine-tuning pre-trained AI models at intermediate stages.
AI checkpoints. These saved snapshots of a machine learning model's state during training act like a pause button, allowing developers to save the model's then-current progress. Because AI checkpoints enable selection of the best-performing model prior to a full-blown deployment, they are commonly used for inference engines, meaning any machine learning AI used to either make predictions or generate outputs from new data.
AI checkpoints are typically saved at regular intervals during training or when certain performance milestones are achieved. Full AI checkpoints save the model's entire state at a point in time, including architecture and data. Partial AI checkpoints save only selected components, such as a machine learning model's weights and parameters. In AI and machine learning models, weights are the numerical values that determine how strong the connections are between the nodes making up the neural network, and parameters are the learnable elements of a model.
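The checkpoint mechanics described above can be sketched in code. The following is a minimal, illustrative example, not a real training framework: it uses a hypothetical toy linear model trained by gradient descent, saves a full checkpoint (weights plus model configuration and training progress) before training continues on data later found to be tainted, then rolls back to that checkpoint to discard everything learned from the tainted data. All function and field names here are assumptions chosen for illustration.

```python
import os
import pickle
import tempfile

# Hypothetical toy model: a single-feature linear model y ≈ w*x + b,
# trained with one gradient-descent step per example on squared error.
def train_step(weights, x, y, lr=0.1):
    w, b = weights["w"], weights["b"]
    grad = (w * x + b) - y
    return {"w": w - lr * grad * x, "b": b - lr * grad}

def save_checkpoint(path, weights, step, full=False, config=None):
    # Partial checkpoint: weights only.
    # Full checkpoint: weights plus model configuration ("architecture")
    # and training progress, so training can resume exactly where it left off.
    state = {"weights": weights, "step": step}
    if full:
        state["config"] = config
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Train on clean data, save a checkpoint, then continue on data that is
# later found to be illegally collected.
weights = {"w": 0.0, "b": 0.0}
clean = [(1.0, 2.0), (2.0, 4.0)]
tainted = [(3.0, 99.0)]

for x, y in clean:
    weights = train_step(weights, x, y)

ckpt_path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
save_checkpoint(ckpt_path, weights, step=len(clean), full=True,
                config={"model": "linear", "features": 1})

for x, y in tainted:
    weights = train_step(weights, x, y)

# Rolling back to the checkpoint "unlearns" the tainted examples:
# the restored state reflects only the clean training data.
restored = load_checkpoint(ckpt_path)
weights = restored["weights"]
```

In a real framework the same pattern applies at much larger scale: the checkpoint before the problematic data entered training becomes the rollback point, avoiding retraining from scratch.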