Try visualizing the Internet’s basic architecture. Could you draw it? What would your mental model of it look like?
Let's be more specific: Say you just purchased shoes from a website using your mobile phone at work. How would you visualize that digital process? Would a deeper knowledge of the underlying architecture make the myriad potential privacy risks in this transaction more apparent? Put another way, what does your knowledge, or lack thereof, of these architectural underpinnings reveal about your understanding of privacy and security risks?
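It helps to make the scene concrete. As one rough illustration (not from the research discussed below), here is a minimal Python sketch, with example.com standing in as a hypothetical shoe store, of the separate systems even a single HTTPS request touches; each step is a place where data about the transaction is visible to someone other than you and the merchant.

```python
import socket
import ssl

host = "example.com"  # placeholder; any public HTTPS site works here

# 1. DNS: your device asks a resolver (at work, often your employer's)
#    to translate the name into an IP address; that lookup is visible
#    to the resolver's operator.
ip = socket.gethostbyname(host)
print(f"DNS resolved {host} -> {ip}")

# 2. TCP + TLS: a connection crosses many intermediate networks. TLS
#    encrypts the payload, but the destination is still observable.
context = ssl.create_default_context()
with socket.create_connection((ip, 443)) as tcp:
    with context.wrap_socket(tcp, server_hostname=host) as tls:
        print(f"TLS established using cipher {tls.cipher()[0]}")

        # 3. HTTP: the request itself, the only layer most users
        #    picture, rides on top of everything above.
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        tls.send(request.encode())
        print(tls.recv(200).decode(errors="replace")[:80])
```

The point of the sketch is not the code but the layers: by the time the "buy" button works, your employer's network, a DNS resolver, multiple intermediate carriers and the merchant's servers have all seen some trace of the transaction.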
Whether you’re a Luddite or a tech whiz, creating these mental models of the Internet is not the easiest endeavor. Just try doing so yourself.
It is an exercise, however, that several individuals undertook for new research with instructive implications for privacy and security pros.
"So everything I do on the Internet or that other people do on the Internet is basically asking the Internet for information, and the Internet is sending us to various places where the information is and then bringing us back." - CO1
You’d think those with a better understanding of how the Internet works would have a better grasp of the privacy and security risks, right? Most likely. Paradoxically, though, a better technological understanding may have very little influence on how an individual responds to potential privacy risks.
This is what a dedicated team of researchers from Carnegie Mellon University worked to discover in their award-winning paper, “‘My Data Just Goes Everywhere’: User Mental Models of the Internet and Implications for Privacy and Security,” the culmination of research by Ruogu Kang, Laura Dabbish, Nathaniel Fruchter and Sara Kiesler, all of CMU’s Human-Computer Interaction Institute and Heinz College in Pittsburgh, PA.
"I try to browse through the terms and conditions but there's so much there I really don't retain it." - T11
Presented at the CyLab Usable Privacy and Security Laboratory’s (CUPS) 11th Symposium on Usable Privacy and Security (SOUPS), their research demonstrated that even though savvy and non-savvy users of the Internet have very different perceptions of its architecture, such knowledge was not predictive of whether a user would take the necessary steps to protect their privacy online. Experience, rather, appears to play the more decisive role.
Kang, who led the team, said she was surprised by the results.
The team conducted semi-structured interviews with 28 participants, half of whom had technical backgrounds, asking each to draw their mental model of the Internet. “We did find a difference of understanding between the two groups,” Kang said. “Those with less technical knowledge drew simpler models.” Likewise, those with deeper technical knowledge “had more articulated models of the Internet as a complex system with varied hardware components and a more involved set of connections among components,” the research paper states.
"I think it goes everywhere. Information just goes, we'll say like the earth. I think everybody has access." - CO4
Perhaps not surprisingly, those with technical backgrounds were more aware of privacy threats, but, significantly, the researchers “did not find a direct relationship between people’s technical background and the actions they took to control their privacy or increase their security online,” the paper observes.
Instead, Kang and her team found that personal experiences more fully informed how users act. She also said the research ultimately reveals that the interface is a key component in helping users understand the underlying digital structure, a finding that could inform the policy and design of new technology and online services.
Participants gave several reasons for not taking protective action, including the now-clichéd “I-have-nothing-to-hide” (il)logic, a willingness to sacrifice privacy for convenience or usability and, for some, a feeling that they had lost control. Participants also demonstrated how personal experience shapes privacy perceptions, ultimately suggesting “that past negative experiences trigger more secure online behavior and a heightened level of privacy concern,” the report notes.
"...the Internet is not designed to be private ... at the end of the day, you're relying on correct implementations of logically sound security protocols and historically most implementations aren't correct and most protocols aren't even logically sound. So, it's just a question of an arms race of who's paying more attention." - TO6
Kang said these results suggest consumer education may not be the best way to nudge users toward more privacy-protective behavior. She posits, rather, that embedding privacy and security protections into the design of systems and applications would be more effective.
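What embedding protection into design can look like in practice is sketched below; this is a hypothetical illustration, not something proposed in the paper. The idea is simply that the privacy-protective choice is the default, so users need neither technical knowledge nor any action to be protected.

```python
from dataclasses import dataclass

# Hypothetical settings object illustrating privacy by design: every field
# defaults to the protective choice, so data sharing is opt-in, not opt-out.
@dataclass
class AccountSettings:
    share_usage_analytics: bool = False  # telemetry requires opt-in
    personalized_ads: bool = False       # ad targeting requires opt-in
    history_retention_days: int = 30     # minimal retention by default
    require_two_factor: bool = True      # strong authentication on by default

# A new user who never opens a settings page still gets the protective
# configuration; no mental model of the Internet is required.
settings = AccountSettings()
print(settings)
```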
“Almost universally, participants’ privacy-protective actions or lack of action were informed by personal context and experiences,” the report states, “such as a feeling they had nothing to hide, and in some cases by immediate cues in the online environment such as a security emblem or famous company name.”
As the report points out, there is a “need for more research into privacy protections that reduce the responsibility on users to understand how the Internet works” and make privacy-protective decisions based on that technical knowledge.
Really, many of these conclusions highlight issues with which privacy pros are already familiar: privacy and security by design, the outdated consent model, contextuality and corporate accountability. So while this important research continues to develop, privacy pros can continue to champion the privacy of their organization’s customers and work closely with their engineering teams to make that a reality.