Yet again, this year’s Privacy Law Scholars Conference featured some of the leading thinking in the field, and yet again, the IAPP was proud to award $2,500 and a speaking role to the authors of those two papers voted as the best of the best.
After a couple of years featuring co-written papers, this year's prizes went to two sole authors: one for work that looks back at the history of the Social Security number, the other for a paper charting a path toward a new and better form of consumer-protection regulation.
Sarah Igo, an associate professor of history at Vanderbilt University, penned the former as part of her not-insubstantial work in developing a book on the history of privacy in the U.S. in the 20th century. “I’m looking at different moments in the thinking about privacy, and how they’ve shifted and changed and reversed, in order to show that it is quite a changeable subject,” Igo says, over the phone from her office in Nashville. “I’m hoping to give us a sense of ways forward by looking backward.”
Lauren Willis is the author of the latter piece, written for the University of Chicago Law Review, as well as a former litigator at the U.S. Department of Justice and Federal Trade Commission (FTC). Speaking from Loyola Law School in Los Angeles, where she is a professor, Willis says she’s “become increasingly frustrated in my work looking at various ways in which regulation seems to be trailing behind firms, always running to catch up and always behind.”
Igo’s work is a jumping-off point from her first book, 2007’s The Averaged American, which studied the way surveys and polls began to infiltrate life in 20th-century America and how they changed the tenor of public discourse. Along the way, “I got interested in privacy as a value,” she says, “and thinking about how new modes of disclosing information became part of American life.”
That led to further exploration and thinking about how new technologies make people more visible and make it harder to hide while operating in society. Developments like photography, fingerprinting, the telegraph, the telephone and the Internet all began to look very similar at a rhetorical level.
Consider, for example, the development of the Social Security number (SSN). Though it would eventually fuel a much higher level of distrust about what the government was doing with personal information and data collection, the SSN was originally something coveted by the public at large. While some political alarms went off, Igo says, most people were eager to be numbered: the number was initially granted only to those with gainful employment, and during the Great Depression a job was a status symbol.
In fact, the biggest privacy worry of the time was the employer, maybe because, Igo reasons, the government at the time “was so abstract and anonymous. People who aren’t present in their lives.”
There are lessons we can draw from that, Igo says. First, consumer education is vital for a truly fair privacy environment. The U.S. government, for instance, thought it was pretty clear when rolling out the SSN: One person, one number. “But that was clearly not evident,” says Igo. You had people who thought they’d get a new number when they got a new job. Some borrowed SSNs from advertised numbers without thinking they were being nefarious at all. Some would lose their number and figure they could just get another one.
“The consumer education piece is always undervalued in rolling out these sorts of systems,” says Igo.
Second, says Igo, “we’re missing something if we only focus on new technologies or new motives that are leading to invasions of people’s privacy. There are other parts of social experience and life that play into whether they consider it an intrusion at all.”
Originally, the SSN “was something prized; the government knowing you meant that you were a protected citizen, someone with benefits and to be taken care of … It might be impossible to recapture that kind of trust, but if we want to really think about that problem, it’s a problem that’s beyond tech, or law and regulation, but it’s about trust and norms about what people are doing with each other’s information. It’s striking to me that if you leave out politics and the longer history of this, you imagine that people always would have been wary of identification projects, and being numbered would always seem like a threat.”
With our current disclosure regulation, Willis argues, firms are basically able to obfuscate what they’re doing while still complying with the law. Thus, “it effectively means that competition cannot really center around privacy because people do not accurately understand what’s going on with their information. They can’t make choices between firms, and they often fail when they try to make choices.”
So, what’s the answer? How about, Willis offers, actually measuring consumer understanding of what’s going on with their data? Let’s move, she argues, from a disclosure model to an understanding model, which would better align the firms’ actions with the regulators’ goals.
Firms could experiment: “Send someone a South Park video clip to explain what’s going on with your data; send a graphic; give somebody a test,” she says. “There’s all this evidence that testing is a great way to teach. You can think of all sorts of ways that firms might approach this other than the text-based model that we know doesn’t work.”
Then, as in industries like the toy market, companies could hire a third-party testing firm to evaluate whether consumers actually understand what’s going on. If a company reaches some predetermined benchmark, it can continue with its operations. If not, it has to try something else and prove to regulators it’s fixing the issue, or face sanctions.
Currently, Willis says, “The regulators come down on, or the courts come down on, firms for not having crossed a T or dotted an I, when that was irrelevant, and they don’t come down on the firm that’s really messing with consumers because they actually complied with the letter of the law.” Wouldn’t it be better, Willis wonders, if companies instead had “to get to a place where a specified proportion of consumers understand basic things with what’s going on with their data”?
Even allowing for creativity of notice isn’t enough without the follow-up testing, she says. Just look at the vaunted information on the cereal box. Do people really understand serving size, how many servings they’re consuming, or what the right amount of sodium or sugar is? Without regulation that tests understanding, companies will continue to try to confound customers if it’s in their best interests. Would simpler red-yellow-green privacy birds, universally applied, help consumers? Maybe. But “we can only get to that point if there was a regulatory reason for firms to do it,” Willis says.
Of course, she says, this might be rolled out first to companies with a larger footprint. And there are issues to figure out, such as how testing results would be fed to regulators, how testers would be approved and how to ensure confidentiality on the testers’ part, but self-regulation is already firmly established in privacy, and this is just another flavor of it.
“One way this strategy might be pursued is that those firms who have been found to engage in deceptive practices,” says Willis, “well they could be required as part of the consent decree to have the testing. They did that in the ‘70s with Hawaiian Punch. They had managed to fool most Americans into thinking it was fruit juice, so the FTC said they had to convince 70 percent of the fruit juice-drinking public that they contain less than 10 percent fruit juice. Rather than put this new label on your Hawaiian Punch that no one is going to look at, because no one is going to notice it.”
The Food and Drug Administration does something similar, as well: over-the-counter drugs have to be tested to make sure consumers understand the directions and the possible side effects. And the EU is testing this in some areas with food, like requiring firms to prove that consumers understand that a low-fat claim isn’t also a claim about a food’s organic nature.
Could the idea work in privacy? We won’t find out without trying. Willis thinks it’s time to give it a shot.