At the Forum on Data Privacy convened by the Massachusetts Attorney General’s Office at MIT this week, Massachusetts AG Maura Healey called for collaboration between the technology innovation community and her office to prevent the misuse of big data and data analytics to harm consumers.

While expressing admiration and gratitude for Massachusetts’s unique contributions to innovation as home to some of the world’s top research institutions and tech companies, and acknowledging the many benefits data-driven technologies can offer consumers, she reminded the audience of roughly 300 that her office frequently receives consumer complaints arising from “abuse and misuse” of these innovations.

“Big data, if used improperly, can be used for discrimination – intentionally or unintentionally,” Healey said. She gave examples of marketing companies using consumers’ race or socioeconomic status to enable or promote predatory lending, and of online price discrimination based on consumers’ location, race, gender, or age.

“One in four Americans has an error on his credit report, which can be a problem for them when buying a home or searching for a job,” Healey declared, adding that “consumers are entitled to be judged on the basis of accurate information.” She pledged that her office would continue to work with companies to prevent discrimination and fraud online.

Privacy and Technology

A panel moderated by Danny Weitzner, Director of MIT’s Internet Policy Research Initiative, discussed a variety of consumer privacy risks posed by big data, including racial profiling. The panelists also discussed the challenges of adopting a one-size-fits-all privacy standard when consumers’ privacy interests and sensibilities vary depending on the context.

MIT economist Catherine Tucker called this the “privacy paradox.” She noted that although people often say they care about privacy, their actions contradict this under certain circumstances. In one of her studies, she gave $100 in bitcoin to a number of MIT undergraduates and asked the participating students for their friends’ names and contact information. The students largely refused, submitting false and even vulgar names rather than disclosing their friends’ identities. But when the researchers then offered the students pizza, the students willingly disclosed their friends’ names and email addresses – even those who had previously self-identified as caring deeply about privacy.

This so-called “privacy paradox” poses challenges for online companies managing notice, choice and consent.

John Moore of Twine Health described a similar paradox in health care. “If you ask patients about data sharing,” Moore said, “at first they’ll hold their health data very tight. But as soon as they have an experience in which data sharing helps them in their lives, or helps others, they will choose to publish their data widely, even beyond their trusted care providers.”

Dipayan Ghosh of Facebook offered some answers. He suggested creating systems to manage privacy in context – in essence “privacy on the ground” – describing Facebook’s internal privacy team as promoting the principles of “transparency, control and trust.” The team works closely with the product development team and the legal team to “really understand the product, factor out consumer harm, and constantly think about privacy top to bottom.”

In response to the concerns about big data and unfair practices, Ghosh said Facebook avoids advertisers that target consumers discriminatorily, and does not use data like race or gender in working with advertisers (although, according to Ars Technica, Facebook does use “ethnic affinity”). He also said Facebook “does not allow the types of scams that might occur on other platforms.” Indeed, in her opening remarks AG Healey had pointedly acknowledged Facebook’s close collaboration with her office to block online predators targeting vulnerable Facebook users.

Privacy and Regulation

A second panel moderated by Harvard Law School’s Christopher Bavitz discussed the role states can play in protecting consumers through state-level laws and enforcement efforts.

Quentin Palfrey, formerly of the White House Office of Science and Technology Policy, declared that the big problem in this space is that Congress is broken. “There are tools society needs to both protect consumers and promote innovation online that call for federal legislation,” he said, “but Congress is not doing its job.” He said the nation’s “patchwork” of privacy laws is “bad,” and that adding states to the mix would make things more complicated. Palfrey argued that the White House’s proposed Consumer Privacy Bill of Rights should have provided the basis for a process ultimately leading to federal legislation. In its absence, he said, the privacy paradox will continue while consumers remain “totally freaked out” and in despair about privacy.

Palfrey called on states to help fill the gap that Congress ought to be filling. State Attorneys General can play a slightly more active role in this space, he suggested, through (1) law enforcement actions to send a clear signal that certain things are not tolerated; (2) the use of regulations to define clearly “bad behavior” that we agree is harmful to consumers, such as advertisements targeted to vulnerable consumers; and (3) calling on legislative bodies to put in place something like a Consumer Privacy Bill of Rights.

Sara Cable, representing the Office of the Massachusetts Attorney General, pointed to the state consumer protection law – similar to the Federal Trade Commission Act – which empowers the state to investigate and enforce against unfair and deceptive trade practices. The vagueness of the terms “unfair and deceptive,” however, makes the statute “tricky to use.” She suggested the agency should use its rulemaking authority – as it did this week for fantasy gaming – to further define “unfair and deceptive” under particular circumstances. She questioned whether certain information should be excluded from use in all circumstances or only in certain ones, or whether a better approach would be to require notices and disclosures instead.

States passing laws piecemeal could have a significant negative impact on global data transfer, however. Cameron Kerry of MIT and Sidley Austin warned that the U.S. “needs to at least keep things interoperable so that data can flow.” A privacy baseline established through a federal Consumer Privacy Bill of Rights, for example, would be preferable to the “losing proposition” of the current sectoral approach, he said. He called for broad and flexible legislation that can change with the times, criticizing Massachusetts’s data breach law for enumerating specific kinds of PII that might be harmful if misused while excluding a great deal of other data whose abuse may also be harmful.

Sarah Holland of Google noted that her company can manage the patchwork of laws, including the lack of a national breach notification law, because it can afford great lawyers. But she observed that most start-ups do not have the same resources as Google, and suggested that innovation might benefit from more uniformity nationwide.

Although Google considers itself the gold standard for clear consumer communication and trust, many on the panel felt that notice, consent and control were not working well. They agreed that any legislation – at the federal or state level – should be technology neutral, and should be based on the harm that can result from data uses rather than singling out specific types of personal information.
