From shadow IT to demonstrable DPIAs: Building visibility for modern privacy programs

A look at creating system-aware, defensible and survivable data protection impact assessments.

Contributors:
Michael Moore
AIGP, CIPP/C, CIPP/E, CIPP/US, CIPM, CIPT, FIP
VP, Head of Legal and Privacy
Glean Technologies Inc.
Michael Bishop
CIPP/A, CIPP/E, CIPP/US, CIPM, CIPT, FIP
Legal Director
Resmed
Privacy practitioners have run data protection impact assessments across enough organizations to see the same pattern: the ideal state looks great, but the inputs arrive so slowly that the assessment goes stale before it's finished.
Some environments can't even reveal a reliable system inventory, let alone knowledge of what personal data each one holds.
On paper, a DPIA is a structured exercise: map processing operations, understand necessity and proportionality, identify risks to individuals' rights, implement controls. In practice, it feels like a half-finished jigsaw puzzle with no picture on the box. Without the whole picture, any DPIA becomes a patchwork rather than a map.
The practitioner's challenge is not whether to conduct DPIAs, but how to do them in a way that is system-aware, defensible to regulators, survivable for a small privacy team and doesn't create privacy fatigue across the organization. In addition to traditional "spreadsheet and checklist" approaches, newer tooling such as software-as-a-service-aware data inventory and classification tools can do much of the legwork, greatly shortening the process and reducing the burden on business partners who own the data systems.
Tooling just speeds up visibility. Actual enforcement, including a single sign-on policy, procurement controls and owner accountability, is what keeps the map accurate.
Where does the organization's data sit, and is it visible?
Most DPIA headaches start with a deceptively simple step: identify where personal data lives. Environments typically fall into four buckets.
Official, IT-blessed SaaS. These include customer relationship management systems, human resources information systems, marketing tools and collaboration suites, generally wired into single sign-on or centrally procured. Because these tools are formally approved, contracts and data protection assessments will be documented.
The problem: IT keeps these systems available and integrated but typically does not track what data is inside at the record level. From a practical standpoint, these are often the easiest place to start, and likely where many DPIA analyses end.
Paid shadow IT. Teams swipe cards for analytics platforms or niche tools, including monthly artificial intelligence subscriptions. Finance might see the charge. Legal might see a DPA, if one exists, and if a user actually flagged the tool to legal. There is no consolidated inventory of these tools, so usage patterns are opaque and personal data exposure is unknown.
For many organizations, paid shadow IT is the gap between "we have contracts" and a robust DPIA program. Even when the tool has privacy controls, most teams never configure them because nobody sees themselves as the admin or feels responsible for managing the risk.
Non-SaaS and air-gapped systems. These include manufacturing lines, lab equipment, biometric gate controls and logistics gateways. They are often air-gapped for ransomware resilience and run by operations teams who see uptime as the priority and privacy questions as a distraction.
Operations teams are measured by output and turnaround time, so there is little incentive to complete privacy questionnaires, which may be seen as "big corporate" overhead.
Unfortunately, these are the systems that show up in headlines: biometric readers capturing truck drivers' fingerprints without consent, resulting in multimillion-dollar settlements under Illinois' Biometric Information Privacy Act. They may not be glamorous, but they often process the most sensitive data, including location data, identity credentials and biometric identifiers.
Free shadow IT. This is where chief information security officers start losing sleep, because visibility and accountability are both missing and there is no single way to address the problem.
It is also the most difficult technology to police, and employees who use free tools outside of official company systems are often hesitant to own up to their use for fear of reprimand. Such tools include freemium products, personal cloud storage and consumer messaging apps. There are no invoices, no DPAs and no central owners within the organization.
When organizations do become aware of shadow IT, they are limited to using web proxy logs or cloud access security broker tools to infer where personal data is going, but without knowing what data or how much.
This is a governance nightmare for many organizations and creates friction between privacy teams and business owners.
What regulators actually care about
Regulators know 100% coverage is impractical. They look for reasonable efforts organizations can show without scrambling. And privacy teams can only assess the data repositories they know about.
The shadow IT or disconnected systems often live out of sight, but they still pose a substantial privacy and governance risk across many privacy regimes.
U.S. regulators are paying attention. At the state level, the California Consumer Privacy Act, Colorado Privacy Act and Connecticut Data Privacy Act require data protection or risk assessments. At the federal level, Federal Trade Commission enforcement has increasingly focused on documented pre-deployment assessments and controls for biometric and other high-risk automated systems, as well as the Health Insurance Portability and Accountability Act Security Rule risk analysis and Gramm-Leach-Bliley Act Safeguards Rule requirements.
The FTC's Rite Aid consent decree from December 2023 is a key example. The agency didn't just penalize the privacy harm, it required Rite Aid to document pre-implementation assessments for all future biometric deployments, signaling that process failures will be treated as independently actionable.
In Australia, the Office of the Australian Information Commissioner positions PIAs under the Privacy Act 1988 and OAIC PIA Guidelines as a core privacy-by-design mechanism. Australian government agencies must conduct PIAs for high privacy risk projects, and for private organizations, PIAs are recommended as a practical way to demonstrate compliance with APP 1 on the open and transparent management of personal information.
In the EU, risk management requirements for high-risk AI systems under Article 9 of the AI Act overlap operationally with Article 35 of the EU General Data Protection Regulation on DPIAs. Organizations deploying AI in human resources, biometrics or critical infrastructure face overlapping assessment obligations that reward a unified discovery approach rather than siloed compliance exercises.
Regulators are tending to focus on patterns of superficial DPIA practices, including those that fail to address risks, that rely on signed DPAs as a substitute for real due diligence, that have no clear record of scoping or prioritization, or that treat DPIAs as one-off paperwork.
Organizations don't have to be perfect, but they do need to show demonstrable, documented assessments with diligent discovery.
Why interviews and spreadsheets don't scale
Historically, DPIAs have been powered by email and hope. Inventory accuracy is an incentives and ownership problem, not just a discovery problem. Typically, privacy teams send an inventory spreadsheet and ask other teams to share their systems and data.
Then, weeks later, if ever, partial responses trickle in. Complex items become one-off meetings that consume scarce and expensive engineering and stakeholder time, leading to frustration with privacy teams and incomplete analyses.
This process creates several persistent problems. It produces coverage bias, because only teams that respond to emails are assessed, while shadow IT and legacy systems are often missed. It leads to privacy fatigue, as business teams come to associate privacy outreach with repetitive forms that yield little visible benefit. And it results in static snapshots: inventories that quickly go stale as organizations grow and add new SaaS tools.
Automation can greatly assist with the data gathering process, freeing privacy practitioners to focus on reviews and judgment.
This guidance is most applicable to organizations that use predominantly cloud-native SaaS infrastructure. Organizations that rely on significant on-premises or hybrid environments may need to supplement these approaches with additional asset inventory tools designed for legacy systems. Even so, partial automated coverage of the main SaaS tools and systems beats the manual alternative.
Use SaaS as an accelerator to supplement DPIA reviews
Start where leverage exists: SSO-connected SaaS. Use admin application programming interfaces, audit logs and connectors to build a live application inventory with identified owners, business units and regions.
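The inventory step above can be sketched in code. This is a minimal, illustrative sketch that normalizes raw records pulled from SSO or admin APIs into a single inventory keyed by application; the record fields (app, owner, business_unit, region) are assumptions for illustration, not any specific vendor's API schema.

```python
# Sketch: merge raw SSO/admin-API records into one live app inventory.
# Field names are illustrative assumptions, not a real vendor schema.

def build_inventory(raw_records):
    """Merge per-connector records into one inventory keyed by app name."""
    inventory = {}
    for rec in raw_records:
        app = rec["app"].strip().lower()  # normalize duplicate spellings
        entry = inventory.setdefault(app, {
            "owners": set(), "business_units": set(), "regions": set(),
        })
        if rec.get("owner"):
            entry["owners"].add(rec["owner"])
        if rec.get("business_unit"):
            entry["business_units"].add(rec["business_unit"])
        if rec.get("region"):
            entry["regions"].add(rec["region"])
    return inventory

records = [
    {"app": "Salesforce", "owner": "j.doe", "business_unit": "Sales", "region": "EU"},
    {"app": "salesforce ", "owner": "a.lee", "business_unit": "Sales", "region": "US"},
    {"app": "Workday", "owner": "hr.ops", "business_unit": "HR", "region": "EU"},
]
inv = build_inventory(records)
```

Normalizing names before merging matters in practice: the same tool often appears under slightly different labels across connectors, and unmerged duplicates inflate the inventory.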
Next, classify the data within those systems and identify structured versus unstructured data, and the relevant business context — for instance, HR, marketing, support or finance. Weigh the risk by scale: a tool holding 30 million contact records does not present the same risk as one holding 3,000.
Then, focus on what drives DPIA priority: sensitive data. Flag likely GDPR Article 9 exposure, that is, processing of special categories of personal data such as health or biometric data, but treat automated classification as a triage mechanism and not the final answer.
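A triage pass of this kind can be as simple as keyword matching against field names. The sketch below is illustrative: the keyword list and field names are assumptions, and, as the text stresses, the output is a prioritization signal for human review, not a legal determination.

```python
# Sketch: keyword-based triage for likely GDPR Article 9 exposure.
# Keyword lists are illustrative assumptions; flags feed human review,
# they are not the final answer.

SPECIAL_CATEGORY_HINTS = {
    "health": ["diagnosis", "medical", "prescription", "health"],
    "biometric": ["fingerprint", "face_scan", "biometric", "retina"],
    "beliefs": ["religion", "political_affiliation", "union_membership"],
}

def triage_fields(field_names):
    """Return the special-category types hinted at by a system's field names."""
    hits = set()
    for field in field_names:
        f = field.lower()
        for category, hints in SPECIAL_CATEGORY_HINTS.items():
            if any(h in f for h in hints):
                hits.add(category)
    return hits

flags = triage_fields(["employee_id", "fingerprint_hash", "medical_notes"])
```

Real classification tools use richer signals (content sampling, regex libraries, ML classifiers), but the triage-then-verify pattern is the same.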
A staged, risk-based approach
With connected visibility into systems, privacy teams can move from theoretical governance to a pragmatic approach. Start by building a baseline inventory across SSO-connected SaaS tools, then classify systems by business function, data types and special-category presence. Prioritize DPIAs for systems that process special-category data (confirmed by human review), operate at scale, involve systematic monitoring or are critical to core services.
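The prioritization factors above can be combined into a simple score to rank systems for DPIA attention. The weights and record-count thresholds in this sketch are illustrative assumptions, not a standard; any real scoring model should be calibrated and documented by the privacy team.

```python
# Sketch: rank systems for DPIA priority from the factors in the text.
# Weights and thresholds are illustrative assumptions, not a standard.

def dpia_priority(system):
    """Score a system: special-category data, scale, monitoring, criticality."""
    score = 0
    if system.get("special_category"):
        score += 40
    if system.get("record_count", 0) >= 1_000_000:
        score += 30
    elif system.get("record_count", 0) >= 10_000:
        score += 15
    if system.get("systematic_monitoring"):
        score += 20
    if system.get("core_service"):
        score += 10
    return score

systems = [
    {"name": "badge-biometrics", "special_category": True,
     "record_count": 30_000, "systematic_monitoring": True},
    {"name": "newsletter-tool", "record_count": 3_000},
]
ranked = sorted(systems, key=dpia_priority, reverse=True)
```

The point of an explicit score is less precision than defensibility: it records why one system was assessed before another.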
Use that inventory to drive focused conversations. Instead of asking teams to list every tool they use, privacy teams can point to specific tools, systems and data footprints and ask for confirmation of purposes and mitigations. This takes the investigation burden off business units.
Address the hard edges. No tool magically discovers every USB stick or isolated factory system. Additional techniques like network traffic and sign-in patterns can be used to flag likely free shadow tools, in partnership with information technology and/or security teams. For air-gapped systems, use organizational charts to identify manufacturing, research and development, and logistics environments where biometric data is likely, then plan targeted DPIAs.
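One way to operationalize the network-traffic technique above is to count distinct users per unsanctioned domain in proxy or sign-in logs: a domain used by several people is more likely a shadow tool than a one-off visit. The log format, sanctioned list and threshold below are assumptions for illustration.

```python
# Sketch: infer likely shadow tools from proxy/sign-in logs by counting
# distinct users per domain absent from the sanctioned inventory.
# Log fields and the user threshold are illustrative assumptions.

from collections import defaultdict

SANCTIONED = {"salesforce.com", "workday.com"}

def flag_shadow_tools(log_events, min_users=2):
    """Return unsanctioned domains used by at least min_users distinct users."""
    users_by_domain = defaultdict(set)
    for event in log_events:
        if event["domain"] not in SANCTIONED:
            users_by_domain[event["domain"]].add(event["user"])
    return {d: len(u) for d, u in users_by_domain.items() if len(u) >= min_users}

events = [
    {"user": "u1", "domain": "freenotes.app"},
    {"user": "u2", "domain": "freenotes.app"},
    {"user": "u1", "domain": "salesforce.com"},
    {"user": "u3", "domain": "pastebin.com"},
]
suspects = flag_shadow_tools(events)
```

As the text notes, this only shows where data may be going, not what data or how much; flagged domains are leads for conversation, not conclusions.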
Perfection is not necessary if organizations show what steps were taken, where blind spots remain, and how residual risk is managed. For legal risk analysis requiring privilege protection, consider whether documentation should be prepared under attorney direction and maintained separately from business operational documents.
Practical takeaways
Regulators don't demand perfection, but they do scrutinize superficiality. Expect to be judged on systematic, thorough, documented DPIAs.
Start with connected SaaS tools, as that's where most data lives in modern organizations. A connected discovery layer provides a fast path to meaningful coverage, and sharing visibility gives privacy, security and legal teams the same map, making it easier to align on controls and defend decisions. Reviews can also be triggered by actual changes, such as new systems, new usage patterns or volume spikes, rather than by static calendars, keeping DPIAs up to date.
Treat special-category data as the North Star. Focus DPIA energy on systems processing health, biometric and sensitive data at scale, and ensure human review of these systems.
Organizations that move DPIA practice from questionnaire-driven guesswork to connected, evidence-based mapping are no longer outrunning an impossible standard. They're building demonstrable governance regulators expect. The organizations that survive regulatory scrutiny aren't the ones with perfect coverage, but the ones that can show their work.




