Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
The term "privacy" eludes quick definitions, and thus so do the jobs of privacy professionals.
Many privacy pros work on data security, artificial intelligence governance, law enforcement response, trust and safety, marketing permissions, and other matters. Does privacy encompass all these roles? It's unclear.
Academics who have tackled this question haven't reached a conclusion, either, and their work illustrates how academic debates can be relevant for privacy pros — particularly those interested in explaining what it is they do all day.
The academic debate
In various works, including his 2008 book "Understanding Privacy," Professor Daniel Solove argued that "privacy" admits of no single definition and is best understood as a taxonomy of related but distinct forms of privacy.
Professor Ryan Calo and Yale Law Resident Fellow Maria Angel argue this taxonomic view limits privacy. Set out in the 2024 article "Distinguishing Privacy Law: A Critique of Privacy as Social Taxonomy," their critique raises two main concerns. First, if defining something as a "privacy" matter depends on recognition, particularly by powerful authorities, then privacy is stuck within existing power structures. But privacy shouldn't be only what the powerful say it is.
Second, privacy as a taxonomy makes it harder to distinguish "privacy" from other concerns raised by technology, including free expression, human dignity and fairness. Privacy can, in some cases, advance things like human dignity. But as Calo, Angel and this author have noted, privacy can also be in tension with these concerns. Definitional boundaries might better help sort out priorities in such cases of tension.
Solove responds in his recent paper "Against Privacy Essentialism," noting that the absence of a single definition of "privacy" means we may not need powerful authorities to select one. Moreover, Solove argues, his approach better manages policy tensions. For example, privacy has come to include concerns with algorithmic manipulation of consumers, which might not have occurred had privacy been confined to a definitional box.
Defining what privacy is and does is also core to setting priorities, roles and responsibilities, and budgets within an organization. And in turn, how those priorities are resourced has important downstream effects, given the role privacy pros play in addressing harms that may fall under the privacy umbrella.
Privacy on the ground
In "Privacy on the Ground," Kenneth Bamberger and Deirdre Mulligan set out an extensive survey of what privacy pros do all day. They describe the work of privacy pros as sorting out a "complicated, chameleon-like set of soft and hard, legal and social constraints and expectations." In business, there's reasons to want any "complicated chameleons" to choose a color and stick to it. As investors say, where there's uncertainty there's risk.
Here, that risk is to the effective support of a privacy team within an organization. Blurred team boundaries, management vagaries, or a sense that the privacy mission is too "squishy" can have concrete impacts on privacy team budgets, roles, authority and careers. They can also put privacy pros in the position of defending against these doubts.
There's another way this debate is relevant to privacy pros. As Bamberger and Mulligan found, in straining to scope privacy, organizations may lean into the belief that privacy is what consumers, or regulators, expect. For businesses inclined to focus on their customers, this has a natural feel to it and can bear good fruit. And, of course, regulatory expectations matter.
That said, the "voice of the customer" can turn into "the voice of the complainer," rather than an accurate view of what consumers want, for example, in a tradeoff between convenience and privacy. And regulators may push beyond what the law requires or adopt more dramatic views to attract attention.
Even if that's not so, if privacy is what consumers and regulators argue for, that can put privacy pros betwixt and between. That is, if "privacy" is what an organization's critics say it is, staff defending "privacy" may end up creating doubts as to whose side they're on.
Ambiguity as to what privacy entails strikes again.
The work remains
If management isn't clear on what privacy professionals do or how privacy work adds value, make lists, identify impacts, and show returns on investment. Give the "you don't want to end up like that company" warnings. Mention 20-year consent decrees. Cite company values.
But also tell the story of the Chinese farmer who embraced uncertainty by making it a companion. When the farmer is asked his view of a series of events, his answer is consistently neither "good" nor "bad," but "maybe." This empowers him to recognize the tremendous variety of possible ways things might go, and to take action informed by awareness of his own limited capacity to define or control them.
The lesson for an organization's management is not to let definitional ambiguity forestall action when it's warranted. Don't go to extremes: neither limiting privacy to a strictly legal minimum, nor using it as a sales pitch or an excuse for unfair practices.
Let impact and probability measures, rather than definitions, guide what risks "privacy" should address. Privacy could be something more than what authorities recognize, as Calo and Angel emphasized. Answering the question "Is this a privacy issue?" with "maybe" avoids excluding information harms that need to be addressed, as Solove emphasized.
In an organizational context, "maybe" means the organization is free to choose, ideally through dialogue on risks, goals and values.
At the end of the day, privacy pros' ability to articulate concrete harms, and their good-faith commitment to addressing them, will likely go a lot further than definitional boundaries.
Chuck Cosson, CIPP/US, is senior counsel, privacy and cybersecurity at REI.