The Information Accountability Foundation used the phrase “dynamic data obscurity” in 2015 after I organized a Washington dialogue and a Brussels session on the topic. With the Court of Justice of the European Union's “Schrems II” decision and draft legislation in Canada, it is time to bring the term back. Below is an update of my 2015 blog.

In 2030, I will be 80 years old and very dependent on data-driven technologies. In 10 years, I will not own a car; instead, I will share a vehicle with others and have it available whenever I need to go somewhere. The next generation of vehicles will be fed by data in the cloud based on observation of me and millions of other travelers. That data will be used for a full range of activities, from improving the design of future vehicles to billing for services. The car will take me to a clinic visit where the physician will already know my 180-day running history, based on the interaction among my various embedded medical devices. All of that history will be generated passively, based on consents forgotten long ago. The data will be shared among the many specialists charged with my health. When I arrive home, my house will recognize me, and the presets will customize my return experience (e.g., they will read my mood to decide whether I want a decaf coffee or a single malt).

This is my future, and as you know, the technology to accomplish it is already here. For example, when I get into my wife’s car on the driver’s side, the car senses that it is me and sets my driving preferences. If I had a defibrillator in my chest, it would report events to my physician. So the leap to a more observational future is easy to predict. What is still not easy to predict is how that observational data will be governed to serve me and my community in a thoughtful and fair way. To the public, observation is tracking, and tracking makes people nervous, even as they have become dependent on things, like cars and medical devices, that are observation dependent. I believe one of the keys will be an evolution of the concept of dynamic data obscurity, where data is governed down to the element level. In many ways, the foundation for the concept may be found in the heightened definitional requirements for pseudonymization in the EU General Data Protection Regulation.

Data is not good or bad; it is when and how data is used that defines benevolence or maliciousness. Data receives context from other data, and data must be linked for that context to exist. There are enormous risks to people and society when data is linked for the wrong reasons or by the wrong people. Given those risks, the default position should be that data in transit and data at rest are robustly obscured. That is why encryption is so common.

Many applications can also be run with the data still obscured. Deidentification was already in wide use in the 1980s, and most analytics can be run with data links obscured. For accuracy, however, data links must sometimes be used to match records before they are processed. The rules governing when to obscure links and when not to must be robust and must be enforced, and technical measures must protect those linking processes. In the end, though, no single, bright-line rule can cover all the human interests at issue when data come together. It is policy processes that ultimately carry that burden. That is why the word “dynamic” is used in the phrase “dynamic data obscurity.”
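One common technical building block behind this idea is keyed pseudonymization: a direct identifier is replaced with a token derived under a secret key, so records about the same person can still be matched for analytics while the identifier itself stays obscured, and destroying or rotating the key severs the link. A minimal sketch in Python using the standard library (the identifier and key values are illustrative, not drawn from any particular system):

```python
import hmac
import hashlib

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym.

    The same identifier and key always yield the same token, so records
    can still be linked for analysis; without the key, the original
    identifier cannot be recovered from the token.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical example: two records about the same person still match...
key = b"linking-key-held-by-a-trusted-custodian"
token_a = pseudonymize("patient-12345", key)
token_b = pseudonymize("patient-12345", key)
assert token_a == token_b  # analytics can join on the token

# ...but rotating (or destroying) the key dynamically severs the link.
token_c = pseudonymize("patient-12345", b"next-quarter-key")
assert token_c != token_a
```

The point of the sketch is the governance property, not the cryptography: whoever controls the key controls whether, and for how long, the data links exist.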

Today, there is still policy confusion about the most basic processing terms, and even further confusion about processing objectives and which rules should apply to which processes. The basic principle, however, is that data that links to people should be governed by dynamic rule sets. Data links should exist only for the brief time they are necessary to achieve a legitimate objective, unless demonstrable assessments determine that the links should continue to be used. Again, that is why the term “dynamic” is used.

Other terms will also have to be defined, and defined in a globally uniform manner. Think tanks and academics have done a great deal of work to define the terms “deidentified,” “anonymous,” and “pseudonymous.” However, those terms still need political definitions that are broadly accepted.

So, after five years, and before another law is enacted, let’s return to the concept of dynamic data obscurity. Let’s determine, before 2030 arrives, which terms still need to be defined for global use and which rules still need to be put in place.
