The privacy implications and questions surrounding artificial intelligence dominate discussions among many privacy professionals. How do we untrain an AI model previously trained on personal information in response to a data subject request? How do we explain how a particular AI model processes personal information in our privacy notice? What role does the privacy team play in AI governance? How do we secure our legitimate interests or consumer consent to process data in an AI model, and what do we do if a consumer withdraws consent?
These are valid questions on which volumes have been written, with more to come. That said, AI is not only an object to be governed or a privacy problem to be solved; it is also a tool we can use in privacy program operations and management.
Let us start by addressing the elephant in the room.
As with the industrial revolution of the 19th century, the AI revolution could completely restructure the workforce across thousands of industries. If we are truly honest with ourselves, anyone considering how to use AI in any industry will likely grapple with the threat to their own self-interest posed by the question: "Will AI replace me?"
The ramifications of that question are psychologically complex. Our society must come to grips with them as we continue to explore, discover and advance AI. Doing so within the privacy profession is a topic for perhaps another article, or at least personal introspection and development. If we are to objectively explore the use of AI in privacy operations, we must at least acknowledge the natural fears and insecurities these questions present.
Next, let us evaluate what kinds of AI are at our disposal.
Two nascent categories of AI tools are generative AI and predictive AI. A recent eWeek article parses out the differences well. "At their foundation, both generative AI and predictive AI use machine learning. However, generative AI turns machine learning inputs into content whereas predictive AI uses machine learning in an attempt to determine the future and prevent bad outcomes by using data to identify early warning signs."
In considering what types of AI could prove fruitful in privacy program operations, we need to make similar distinctions in our work. What parts of the privacy program are generative or conversational? What parts are predictive?
It is worth systematically evaluating the privacy operational life cycle to determine the associated workflows that could be aided or completed by AI tools in a generative or predictive sense.
Assessment
The first step in the privacy operational life cycle is assessment. This includes the creation of data inventories, EU General Data Protection Regulation Article 30 records of processing activities, data protection impact assessments, privacy impact assessments, third-party risk assessments and so forth. These processes could benefit from a collaborative mix of generative and predictive AI.
There are already a variety of automation tools that scan databases and help generate data inventories and maps. These tools also aid in data collection about solutions and vendors through questionnaires. Generative AI tools could use all that data to create and maintain dynamic GDPR Article 30 records of processing activity. A predictive AI tool trained on regulatory requirements, privacy frameworks, and historical privacy violation information could use ROPA data to predict risks associated with individual vendors or solutions that process personal information.
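To make the ROPA step concrete, here is a minimal sketch in Python of how a generative tool might draft an Article 30 entry from inventory data. The `generate` function is a hypothetical stand-in for whatever LLM service an organization uses, and the inventory fields are illustrative only:

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM API the organization uses."""
    return "[model output goes here]"

# Illustrative inventory entry; a real one would come from scanning tools.
inventory_record = {
    "system": "CRM",
    "data_categories": ["name", "email", "purchase history"],
    "purpose": "customer support",
    "recipients": ["support ticketing vendor"],
    "retention": "3 years after last contact",
}

prompt = (
    "Draft a GDPR Article 30 record of processing activity from this data "
    f"inventory entry, and flag any required fields that are missing:\n{inventory_record}"
)

ropa_entry = generate(prompt)  # a human reviews the draft before it is filed
```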
Privacy programs could then feed those predicted risks, other collected data and even source code into a generative AI tool to produce a PIA and/or DPIA. Generative AI tools could also aid in the questionnaire process, using a chatbot-like interface to collect answers, provide clarity and reduce unnecessary questions that conditional logic may not always catch.
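A sketch of what that conversational intake might look like, reusing the hypothetical `generate` wrapper; the question set and follow-up logic are illustrative, not a complete DPIA questionnaire:

```python
def generate(prompt: str) -> str:  # hypothetical LLM wrapper, as before
    return "NONE"  # placeholder; a real model returns text

questions = [
    "What personal data does the system collect?",
    "Is any of it sensitive (health, biometric, children's data)?",
    "Who will the data be shared with?",
]

answers = {}
for question in questions:
    answers[question] = input(f"{question}\n> ")
    # Ask the model for a clarifying follow-up -- the kind of gap
    # rigid conditional logic can miss.
    follow_up = generate(
        f"Question: {question}\nAnswer: {answers[question]}\n"
        "If the answer is too vague for a DPIA, ask one clarifying "
        "question; otherwise reply NONE."
    )
    if follow_up.strip() != "NONE":
        answers[follow_up] = input(f"{follow_up}\n> ")

draft = generate(f"Draft DPIA sections from these answers:\n{answers}")
# The draft goes to a privacy pro for review, not straight into the record.
```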
Protection
The second step in the privacy operational life cycle is protection. This includes privacy and data protection by design and default, cybersecurity and technical controls, privacy policy development and compliance management, third-party compliance, and data retention.
This phase of the privacy operational life cycle could benefit from generative AI. These tools could use inputs regarding proposed system functionality in the development process to produce privacy-related software requirement specifications and agile user stories for development teams. They could then test subsequently developed source code against those specifications to assess compliance.
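A minimal sketch of that requirements-then-review loop, again with the hypothetical `generate` wrapper; the feature description and code fragment are invented for illustration:

```python
def generate(prompt: str) -> str:  # hypothetical LLM wrapper, as before
    return "[model output goes here]"

feature = "Newsletter signup form that stores email address and birth date."

# Turn proposed functionality into privacy requirements and user stories.
requirements = generate(
    "Write privacy-related software requirement specifications and agile "
    f"user stories (data minimization, consent, retention) for: {feature}"
)

# Later in the sprint, check committed code against those requirements.
source_code = '''
def signup(email, birth_date):
    db.save({"email": email, "birth_date": birth_date})  # no consent flag
'''
review = generate(
    f"Requirements:\n{requirements}\n\nCode:\n{source_code}\n"
    "List any requirements this code appears to violate."
)
# The review is a prompt for the development team, not a compliance ruling.
```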
One of the seemingly more obvious uses of generative AI is the development of organizational privacy notices and various policies based on information security controls and data from the assessment phase, including the inventories, maps, Article 30 ROPA and PIAs/DPIAs. Generative AI trained on data retention regulations and the long-term business use of data could create customized and specific retention policies. It could further assist privacy pros by generating training plans and communication regarding company privacy requirements to help the entire workforce protect sensitive and private information.
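As a sketch, the retention inputs might be structured like this before being handed to a generative tool; the record types, periods and placeholder citations are illustrative and would need verification by counsel:

```python
def generate(prompt: str) -> str:  # hypothetical LLM wrapper, as before
    return "[model output goes here]"

# Placeholder inputs -- every citation and period must be verified by counsel.
retention_inputs = [
    {"record_type": "payroll records", "legal_requirement": "<statute here>",
     "business_use": "audit support", "proposed_period": "7 years"},
    {"record_type": "web analytics logs", "legal_requirement": None,
     "business_use": "product improvement", "proposed_period": "13 months"},
]

policy = generate(
    "Draft a data retention policy from these inputs. Where no legal "
    "requirement is listed, recommend a conservative default and flag it "
    f"for legal review:\n{retention_inputs}"
)
```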
Sustain
The third step in the privacy operational life cycle is sustainment. This phase includes further workforce training and awareness, and the development of privacy program key performance indicators.
I already addressed the use of generative AI to create training content. However, as an AI-empowered privacy program matures, predictive AI could also provide suggestions for targeted training based on trends, open-source threat intelligence, the results of phishing tests and tabletop exercises, third-party audits, and patterns of privacy noncompliance within the organization. Predictive AI tools could provide privacy program managers with suggested lead metrics by recognizing patterns of behavior that preceded prior privacy violations or incidents. Generative AI could then create reports based on those key performance indicators to help privacy program managers sustain their organization's privacy initiatives.
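The lead-metric idea can be illustrated with a toy model in scikit-learn. The features, training data and threshold here are invented placeholders; a real model would be trained on the organization's own incident history:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is a business unit's recent behavior --
# [overdue trainings, failed phishing tests, stale access reviews] --
# and the label marks whether a privacy incident followed.
X = np.array([[0, 1, 0], [3, 4, 2], [1, 0, 1], [5, 2, 3], [0, 0, 0], [4, 3, 1]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score current behavior as a lead indicator, not a verdict.
current = np.array([[2, 3, 1]])
risk = model.predict_proba(current)[0, 1]
print(f"Estimated incident risk: {risk:.0%}")
```

Even a simple model like this only suggests where to look; the privacy team decides whether targeted training or other intervention is warranted.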
Respond
The final phase in the privacy operational life cycle is response. It is the step most privacy pros want to take: responding to data subject access requests. It is also the step most privacy pros want to avoid: dealing with the fallout of a data breach or incident.
Generative AI could interact with data subjects conversationally, while predictive AI makes suggestions regarding whether to approve or reject a data subject's request. Generative AI could aid privacy pros in the development of dynamic incident-response plans based on legal requirements, frameworks and best practices, open-source threat intelligence, forensics, security-incident response team expertise, and more.
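A minimal sketch of that triage flow, with a toy rule standing in for the predictive model and the same hypothetical `generate` wrapper for the conversational side:

```python
def generate(prompt: str) -> str:  # hypothetical LLM wrapper, as before
    return "[model output goes here]"

def suggest_disposition(request: dict) -> str:
    """Toy stand-in for a predictive model scoring the request."""
    if not request["identity_verified"]:
        return "reject: identity not verified"
    if request["type"] not in {"access", "deletion", "correction"}:
        return "escalate: unrecognized request type"
    return "approve"

request = {"type": "deletion", "identity_verified": True,
           "subject": "jane@example.com"}
suggestion = suggest_disposition(request)

draft_reply = generate(
    f"Draft a plain-language reply to a data subject {request['type']} "
    f"request. Suggested disposition: {suggestion}."
)
# A privacy pro reviews both the suggestion and the draft before sending.
```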
Predictive AI could integrate business continuity and disaster recovery plans (which generative AI could likely help the appropriate professionals create) with incident response plans to suggest next courses of action during remediation of a breach. Generative AI could create breach-notification letters and, when the dust settles, predictive AI could use the information collected during the response to suggest security and privacy practice enhancements.
As the name of the privacy operational life cycle suggests, this is not a linear process. It's cyclical. Data produced during incident response could fuel the AI processes discussed in the assessment, protection and sustainment phases in a robust ecosystem of generative and predictive AI tools and processes that enhance privacy program operations and management.
Where is that elephant?
If the use of AI in the workplace is as inevitable as we are led to believe, and organizations continue looking for ways to cut costs and increase efficiency, we should not assume the privacy profession will be immune.
I have cast a big vision — perhaps even naively — in which a large swath of privacy program operations are performed by AI. Let us keep the elephant in its place by remembering that responsible AI does not replace humans. It helps them. "AI made me do it" will not be a good defense when regulators fine organizations for privacy and security violations that could have been prevented but for the organization's blind obedience to whatever predictive AI suggested or lazy use of whatever generative AI produced.
Additionally, much of the work of privacy requires legal advice, and, although some generative AI models have passed the bar exam, no jurisdiction is ready to give an AI tool a law license. As such, there should always be humans "in the loop" when it comes to AI-enhanced privacy program management.
Privacy pros must be experts: self-disciplined and critical of the output of AI tools in the privacy profession. They must work to ensure AI models are properly trained and guard against detrimental overreliance on what those tools produce. There should always be a place for privacy pros in privacy operations.
The table below summarizes the use of generative and predictive AI tools throughout the privacy operational life cycle.
| Phase | Activity | Predictive AI | Generative AI | Other automation | Human role |
| --- | --- | --- | --- | --- | --- |
| Assess | Data inventories | — | — | Scanning | Manual; automation supervision |
| Assess | Data mapping | — | — | Data lineage | Manual; automation supervision |
| Assess | Article 30 ROPA | — | Create the ROPA document | Aggregation | Manual; AI supervision; automation supervision |
| Assess | PIA/DPIA | Suggest and/or predict data processing risks; suggest risk scores | Create the PIA/DPIA document; interact with developers conversationally | Questionnaire routing; report generation | Manual; AI supervision; automation supervision |
| Assess | Third-party risk management (TPRM) | Suggest and/or predict third-party risks; suggest risk scores | Create risk assessment documents; interact with third parties conversationally | Questionnaire routing; report generation | Manual; AI supervision; automation supervision |
| Protect | Privacy/data protection by design and default | Suggest privacy-related software requirement specifications | Create agile user stories and similar documents | — | Manual; relational; AI supervision |
| Protect | Technical controls | Suggest appropriate technical controls | Create governance, risk and compliance, and technical policies | — | Manual; AI supervision |
| Protect | Policy development and compliance | — | Create privacy notices and policies | — | Manual; AI supervision; legal advice |
| Protect | Third-party compliance | Suggest and/or predict third-party noncompliance or audit focus areas | Interact with third parties conversationally | — | Manual; relational; AI supervision |
| Protect | Retention | Suggest retention periods based on regulatory requirements and the usefulness of old data | Create retention policies | Automatic purge processes | Manual; AI supervision; automation supervision |
| Sustain | Training and development | Suggest training topics based on open-source intelligence, threats, known privacy violations, etc. | Create training plans and content | — | Manual; relational; AI supervision |
| Sustain | Key performance indicators and metrics | Suggest lead metrics for privacy violations based on historical pre-violation behavior | Create reports | Power BI and similar reporting | Manual; AI supervision; automation supervision |
| Respond | Incident response plans | Suggest specific threat scenarios | Create incident response plans | — | Manual; relational; AI supervision |
| Respond | Business continuity and disaster recovery | Suggest next steps in incident remediation | — | — | Manual; AI supervision |
| Respond | Incident notification | — | Create incident notification letters | — | Manual; AI supervision |
| Respond | Data subject access requests | Suggest approval or rejection of a data subject request | Interact with data subjects conversationally | Scanning | Manual; AI supervision |