
Privacy Perspectives | What Anonymization and the TSA Have in Common


What does the anonymization of data—the masking of private information by using a single, unchanging identifier to hide the connection between data and data subject (also known as “static anonymity”)—have in common with the tiresome kabuki theater that the Transportation Security Administration (TSA), an agency of the U.S. Department of Homeland Security, requires us to go through at airport checkpoints? Not much, it would appear. But upon closer examination, both encourage complacency by fostering not only a false sense of security but a false sense of utility as well.
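For readers who think in code, here is a minimal sketch of what static anonymity amounts to in practice. The field names and salt are hypothetical, and this is an illustration of the general technique rather than any particular product’s implementation: a direct identifier is replaced with a single, unchanging token, so every record belonging to the same person stays linkable under that one pseudonym.

```python
import hashlib

# Hypothetical illustration of "static anonymity": one fixed, unchanging
# pseudonym per data subject. Field names and the salt are made up.
SALT = b"fixed-organization-salt"

def static_pseudonym(subject_id: str) -> str:
    # The same input always maps to the same token, so every record tied to
    # one person remains linkable under a single identifier.
    return hashlib.sha256(SALT + subject_id.encode()).hexdigest()[:12]

records = [
    {"subject": "alice@example.com", "event": "insulin purchase"},
    {"subject": "alice@example.com", "event": "glucose monitor purchase"},
]

masked = [{**r, "subject": static_pseudonym(r["subject"])} for r in records]
print(masked)  # both rows carry the identical pseudonym
```

That sameness is exactly what breeds complacency: the data looks “anonymous,” yet the one persistent token quietly preserves the link between a person and everything recorded about them.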

We’re not discounting the value of anonymization; it powered the growth of the Internet. But today, technology, markets, applications and threats have evolved while the protocols to keep personally identifiable data anonymous have not. If we are to mine the vast potential of data analytics to create high-value products and services that improve and even save lives while meeting the privacy expectations of the public and regulators, we need new tools and thinking.

Stop Patting Down Grandma

To understand the problem, consider the great failure of TSA airport screening. In treating every passenger as an equal threat, the agency collects more data than it can possibly analyze for patterns of terrorist activity, violating privacy for no real gains in security or utility. That’s why aviation security experts are calling for the TSA to vary screening intensity based on passenger profiles, acknowledging that 88-year-old grandmothers are unlikely to be carrying explosives.

Big data merely compounds the TSA’s error. In data analytics, the phrase “You can have privacy or value, but you cannot have both” is accepted as an axiom, but it’s actually a dangerous fallacy. Zero privacy reduces the value of data because it does not filter out anyone or anything, leaving too many choices and an excess of noise. Zero privacy can also subject an identifiable data subject to potential discrimination and harm while exposing data processors to potential liability.

On the other hand, complete data anonymity restricts the relevant data that could be used to protect individual health and safety while fueling useful, valuable products and services. The irony, of course, is that static de-identification schemes, which can make data all but useless to authorized users, can be easily broken by unauthorized ones. The result? A false sense of security—in which privacy protection satisfies neither consumers nor regulators—and utility, in which our all-or-nothing privacy approach either buries organizations in irrelevant data or denies it to them altogether.
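To see why the “easily broken” claim holds, consider a simplified sketch of a linkage attack, the classic technique of joining a de-identified dataset to public auxiliary data on shared quasi-identifiers. The datasets and field names below are invented for illustration.

```python
# Illustration of how a static scheme can be broken by linkage: an adversary
# holding auxiliary data that shares quasi-identifiers (ZIP code and birth
# year here, both hypothetical fields) can re-attach names to "anonymous"
# rows, and the unchanging pseudonym then exposes every other record it
# appears on.
deidentified = [
    {"pseudonym": "a1b2c3", "zip": "02139", "birth_year": 1946, "diagnosis": "arrhythmia"},
    {"pseudonym": "d4e5f6", "zip": "60601", "birth_year": 1990, "diagnosis": "asthma"},
]
voter_roll = [  # public auxiliary data the adversary already holds
    {"name": "Alice Smith", "zip": "02139", "birth_year": 1946},
]

for row in deidentified:
    for aux in voter_roll:
        if (row["zip"], row["birth_year"]) == (aux["zip"], aux["birth_year"]):
            print(aux["name"], "->", row["pseudonym"], row["diagnosis"])
```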

The false dichotomy between privacy and value fuels misunderstandings and misconceptions while impairing the ability of organizations to fully leverage the commercial potential of big data. However, once we can move past this simplistic thinking, our resources can be put to better use. There are other paths forward.

Dynamic Data Obscurity: Bridging the Privacy-Value Gap

Recently, Martin Abrams, executive director of the Information Accountability Foundation, wrote:

“When one believes in accountability based information policy management one is always looking for controls that are effective and will be trusted by enforcement agencies. Controls are what make it possible for an organization to make promises and be able to demonstrate their integrity. Controls are a combination of policies with penalties and the technology tools to make those policies work … We believe the solutions are part of a field we have begun to call 'Dynamic Data Obscurity.' Dynamic data obscurity involves obscuring data down to the element level when that level of security is necessary, and making sure that rules which control when elements can be seen are real and enforced. Dynamic data obscurity is also about making the technology controls harder to break but still allowing for appropriate uses. It requires both new technologies combined with effective internal monitoring and enforcement.”

Dynamic data obscurity improves upon static anonymity by moving beyond protection at the data record level to enable protection at the data element level. It empowers privacy officers to improve the “optics” of data protection for data subjects, regulators and the news media while deploying next-generation technology solutions that deliver more effective data privacy controls and maximize data value.
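As a rough illustration of what element-level protection might look like, consider a per-field visibility rule enforced at the moment data is viewed. The roles, fields and policy below are our own hypothetical example, not a description of any specific product.

```python
from typing import Any

# Hypothetical sketch of element-level obscuring: each field carries its own
# visibility rule, enforced at access time, instead of masking or releasing
# whole records. The roles, fields and policy are illustrative only.
POLICY = {
    "diagnosis": {"physician"},             # only treating physicians
    "zip_code": {"physician", "analyst"},   # coarse location is shared more widely
    "name": set(),                          # direct identifiers: hidden by default
}

def view(record: dict[str, Any], role: str) -> dict[str, Any]:
    # Apply the per-element rules; anything not permitted is obscured.
    return {
        field: value if role in POLICY.get(field, set()) else "<obscured>"
        for field, value in record.items()
    }

row = {"name": "Alice", "zip_code": "02139", "diagnosis": "type 1 diabetes"}
print(view(row, "analyst"))    # zip_code visible; name and diagnosis obscured
print(view(row, "physician"))  # zip_code and diagnosis visible; name obscured
```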

An Ethical Approach

Anonos is a participant in the Information Accountability Foundation and a supporter of the foundation’s commitment to real, demonstrable responsibility. We believe that a core tenet of data privacy ethics is the ability to demonstrate that you can, in fact, keep your promises. Dynamic data obscurity technologies enable companies to show data subjects that, in addition to coming up with new ways to derive value from data, they are pursuing equally innovative technical approaches to protecting data privacy—an especially sensitive and topical issue given the recent epidemic of data security breaches around the globe.

We've therefore developed an approach to dynamic data obscurity—we call it Dynamic Anonymity—that dynamically segments and applies re-assignable dynamic de-identifiers (DDIDs) to data stream elements at various stages. This significantly reduces the risk of personally identifying information being unintentionally shared in transit, in use or at rest. Meanwhile, trusted parties, in accordance with permissions established by or on behalf of data subjects, maintain the ability to "re-stitch" the data stream elements together. In addition to protecting anonymity, Anonos Dynamic Anonymity also allows data end-users to selectively filter only those elements that they find useful, thereby reducing noise and increasing the utility of the data stream.
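A simplified sketch of that flow, assuming hypothetical class and function names rather than the actual Anonos interfaces, might look like this:

```python
import secrets
from collections import defaultdict

# Hedged sketch of the idea described above, not the Anonos implementation:
# each data stream element receives a fresh, re-assignable dynamic
# de-identifier (DDID), and only a trusted party holding the mapping, acting
# within the data subject's permissions, can re-stitch the elements.
class TrustedParty:
    def __init__(self) -> None:
        self._mapping: dict[str, tuple[str, int]] = {}  # DDID -> (subject, window)

    def assign_ddid(self, subject: str, window: int) -> str:
        ddid = secrets.token_hex(6)  # a new token each time; never reused statically
        self._mapping[ddid] = (subject, window)
        return ddid

    def restitch(self, ddids: list[str], authorized: bool) -> dict[str, list[int]]:
        # Re-linking is only possible with the permissions established by or
        # on behalf of the data subject.
        if not authorized:
            raise PermissionError("re-identification not permitted")
        grouped: dict[str, list[int]] = defaultdict(list)
        for ddid in ddids:
            subject, window = self._mapping[ddid]
            grouped[subject].append(window)
        return dict(grouped)

trusted = TrustedParty()
stream = [trusted.assign_ddid("alice", window) for window in range(3)]
print(stream)                                      # three tokens an outsider cannot link
print(trusted.restitch(stream, authorized=True))   # {'alice': [0, 1, 2]}
```

The point of the sketch is the separation of duties: outside observers see only short-lived, unlinkable tokens, while the trusted party alone holds the mapping needed to re-stitch them, and only under the permissions the data subject has granted.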

Vibrant and growing areas of economic activity—the “trust economy,” life sciences research, personalized medicine/education, the Internet of Things, personalization of goods and services—are based on individuals trusting that their data is private, protected and used only for authorized purposes that bring them maximum value. This trust cannot be maintained using static anonymity. We must embrace new approaches like dynamic data obscurity to earn and maintain that trust and to more effectively serve businesses, researchers, healthcare providers and anyone who relies on the integrity of data.
