Privacy law in the U.S. is weaker than in most places, but hey, at least we’ve got Section 5.
While many countries around the world have affirmative privacy protections for most data, the U.S. instead relies on a hundred-year-old prohibition against deceptive business practices, which merely bars companies from tricking people about their data practices. In recent years, the FTC has expanded its interpretation of Section 5's ban on deceptive practices to reach not just misstatements but also material omissions: cases where a company, by failing to mention a potentially controversial privacy practice, effectively deceives consumers. This line of enforcement is all in the name of creating external accountability for privacy practices and a transparent market for personal information. That market is far from perfect, and I think the law should do more to empower people to assess various privacy practices and control the flow of their information.
Still, at bottom, the U.S. has always had one (fairly low!) baseline: don’t lie about what you’re doing.
Recently, however, even this weak standard has been called into question—by two sitting Commissioners of the FTC, no less. Commissioners Maureen Ohlhausen and Joshua Wright have both indicated that the FTC shouldn't bring deceptive-practice cases against companies absent some objective assessment of consumer harm.
This is an extremely dangerous idea that risks upsetting consumer self-determination—and supplanting individual choice with a paternalistic assessment of values by a regulator.
People are increasingly taking privacy considerations into account when making market choices. A desire for occasional seclusion and some control over personal information are core human values, and sometimes we might want to demand some assurances from companies we interact with about how our information is going to be treated. Now others might think those personal choices are irrational—they might argue that personalized advertising is completely benign, that total surveillance is both inevitable and desirable, and that you’d have to be wearing a tinfoil hat to try to limit data collection.
But that’s not someone else’s decision to make for me: individuals should be free to value considerations such as privacy however they want. In economics, this idea is called utility—the degree of subjective satisfaction that an individual derives from certain choices. Privacy law should—at the very least—encourage greater transparency about privacy practices so consumers can make their own determinations about the value of privacy.
I might, for example, be willing to pay $30 a month to limit behavioral advertising by my Internet service provider. Apple and Fitbit recently adopted policies to prevent wearables’ information from being sold to data brokers. If these privacy promises are violated, should that deception be sanctioned, or should regulators first engage in a cost-benefit analysis to determine if I was really harmed by the transfer of my personal information?
Commissioners Ohlhausen and Wright, however, both dissented from the FTC's decision in the Nomi case. Commissioner Ohlhausen concluded, without further explanation, that the FTC "should use its limited resources to pursue cases that involve consumer harm." It's not clear from this statement how that harm should be determined or by whom, but in other public statements Commissioner Ohlhausen has argued that the FTC should exercise regulatory humility by taking only those cases where it determines that deceptive practices lead to concrete and observable harms—as determined by the FTC. She has also argued for considering the benefits of certain data practices when deciding whether to bring a case for deception.
Commissioner Wright's dissent focuses primarily on the materiality of the deceptive statement rather than on harm (arguing that Nomi's policy offered consumers another, easier way to opt out of data collection), but elsewhere he has advanced the idea that the FTC should act only in the face of objective harm. In a recent speech to the Chamber of Commerce, he stated that the FTC should articulate "cognizable" harms before intervening, and suggested—or at least strongly implied—that Nomi's failure to offer its promised opt-out did not result in an injury, at least not compared to the business insights provided by Nomi's tracking. He also emphasized the importance of doing a "cost-benefit analysis" before taking cases, and argued that the FTC should weigh the harms and benefits of a particular practice in its deception cases as it already does in enforcement cases alleging "unfair" business practices.
Unfairness and deception are very different concepts, however.
Under its unfairness authority, the FTC is statutorily required to make a value judgment before intervening on behalf of consumers unable to protect themselves. Under its deception authority, the FTC is only supposed to look for statements and practices that are likely to mislead consumers trying to make their own decisions. Whether a company's misrepresentations caused a loss should be judged from the individual's perspective, not the FTC's. If a used car dealer offered an F-150 for sale but delivered a Silverado, a regulator shouldn't perform a cost-benefit analysis about which is the better truck. It should instead require a transparent market where people get what they pay for—whether regulators think those decisions are rational or not.
There is no question that consumers benefit tremendously from a lot of data collection, but privacy authorities must not paternalistically permit privacy deception because they believe the possible benefits outweigh individual concerns that they deem unworthy. Commissioner Wright recently said of the Internet of Things that "the fact that there are millions of data points is not—in and of itself—a privacy risk." I disagree, and I suspect many others would too.
As individuals' ability to enforce the law themselves continues to erode, the FTC has an obligation to hold companies accountable for the assurances they make, and not to substitute its own views about the merits of personal privacy for those of self-interested consumers.