The digital revolution is transforming society as we know it. In addition to economic and social progress, every technological revolution also brings disruption and friction. Social resistance to the excesses of the new data economy is becoming increasingly visible, leading to calls for new legislation.

Commentators argue that a relatively small number of companies are disproportionately profiting from consumers’ data and that the economic gap continues to grow between technology companies and the consumers whose data drive those companies’ profits. Consumers are also becoming more aware that free online services come at a cost to their privacy; as the modern adage has it, consumers are not the recipients of free online services but are in fact the product.

U.S. legislators are responding by proposing prescriptive notice and choice requirements intended to serve a dual purpose: giving consumers greater control over the use of their personal information and, at the same time, enabling them to profit from that use.

For example, Gov. Gavin Newsom, D-Calif., has proposed that consumers should “share the wealth” that technology companies generate from their data, and former Democratic presidential candidate Andrew Yang recently launched a Data Dividend Project to advocate for consumers to own their data and be paid for its use. Jaron Lanier and E. Glen Weyl propose the creation of data unions that would negotiate payment terms for the user-generated content and personal information supplied by their users. In each case, the goal is to spread the wealth created by consumers’ data to the consumers themselves.

Though the goals of protecting, empowering and compensating consumers are commendable, the proposals for achieving them are counterproductive. The remedy here is worse than the ailment.

To illustrate the underlying issue, consider misleading advertising and unfair trade practices. If an advertisement is misleading or a trade practice unfair, it is intuitively understood that companies should not be able to remedy the situation by obtaining the consumer’s consent to the practice. In the same vein, if companies generate large revenues from misleading and unfair practices, the solution is not to ensure that consumers get their share of the illicitly obtained revenues. If anything would provide an incentive to continue such practices, this would be it.

As always with data protection in the digital environment, the issues are far less straightforward than their offline equivalents and therefore more difficult to understand and address. We will need to take the time to understand the rationales underlying the existing rules and translate them into the new reality. Our main point here is that consumer control and wealth distribution are separate but intertwined concepts. Both purposes are worthy, but they cannot be combined in a single instrument; they need to be regulated separately and in different ways. Adopting a commercial trade approach to privacy protection will ultimately undermine rather than protect consumer privacy. To complicate matters further, experience with the consent-based model for privacy protection in other countries, and especially under the EU General Data Protection Regulation, shows that consent is not a panacea for privacy protection.

Why should we be skeptical of consent as a solution for consumer privacy?

The consent model is based on the assumption that as long as people are informed about which data are collected, by whom and for which purposes, they can make an informed decision. In a world driven by artificial intelligence, however, we can no longer fully understand what is happening to our data, and the concept of free choice is undermined by the very technology our laws aim to protect us against. The underlying logic of data-processing operations and the purposes for which they are used have become so complex that they can only be described by means of intricate privacy policies that are simply not comprehensible to the average citizen.

Consent-based data protection laws have resulted in what has been coined mechanical proceduralism, whereby organizations go through the mechanics of notice and consent without any reflection on whether the relevant use of data is legitimate in the first place. In other words, the current preoccupation with what is legal distracts us from asking what is legitimate to do with data. We see this reflected in the EU’s highest court having to decide whether a pre-ticked box constitutes consent (surprise: it does not) and in the European Data Protection Board feeling compelled to spell out whether cookie walls constitute “freely given” consent (surprise: they do not).

Privacy legislation needs to regain its role in determining what is and is not permissible. Instead of a legal system based on consent, we need to rethink the social contract for our digital society by having the difficult discussion about where the red lines for data use should lie, rather than passing responsibility for a fair digital society to individuals and asking them to make choices they cannot oversee.

The U.S. system: Notice and choice (as opposed to notice and consent)

In the United States, companies routinely require consumers to consent to the processing of their data, such as by clicking a box stating that they agree to the company’s privacy policy, even though there is generally no consent requirement under U.S. law (with exceptions under specific laws such as the Children’s Online Privacy Protection Act and the Fair Credit Reporting Act).

The routine requesting of consent may reflect an attempt to hedge the risk of consumers challenging the privacy terms as an "unfair trade practice": the argument is that the consumer made an informed decision to accept the privacy terms as part of the transaction and was free to reject the company’s offering and choose another. In reality, of course, consumers have little actual choice, particularly where the competing options are limited and offer similar privacy terms.

In economic terms, we have an imperfect market in which companies do not compete on privacy, given their shared interest in acquiring as much personal information about consumers as possible. This leads to a race to the bottom in privacy protection. An interesting parallel is that the EDPB recently rejected the argument that consumers have freedom of choice in these cases. U.S. privacy advocates are now also urging the public and private sectors to move away from consent as a privacy tool.

Disentangling economic objectives from privacy objectives

Moving away from consent as a privacy concept is only part of the equation, however.

The Newsom and Yang proposals to share the wealth and grant data dividends are economic tools: a means of giving consumers leverage to gain value from companies for the use of their data. These proposals all require that we put a "price tag" on personal information.

Whatever valuation and payment model might be adopted, however, it risks devaluing privacy protection. Both the data dividend concept and the California Consumer Privacy Act's approach to financial incentives suggest that the value of a consumer’s personal information is measured by its value to the company.

This value may have little correlation with the privacy risks to the consumer. The cost to consumers may depend not only on the sensitivity of the use of their data, but also on the potential impact if their data are lost.

For example, information about a consumer’s personal proclivities may be worth only a limited dollar amount to a company, but the consumer may be unwilling to sell that data for that amount (or, potentially, for any amount). When information is lost, the personal harm or embarrassment to the individual may be much greater than the value to the company. The impact of consumers’ data being lost will also often depend on the combination of data elements. For instance, an email address is not in itself sensitive, but in combination with a password it becomes highly sensitive, as people often use the same email/password combination to access different websites. Moreover, commentators have rightly noted that the data dividend paid to an individual is likely to be mere “peanuts,” given the vast numbers of consumers whose information is being used.

Closing thoughts

There are many reasons why societies may seek to distribute a portion of the wealth generated from personal information to the consumers who are the source and subject of this personal information. This does not lessen the need for privacy laws to protect this personal information, however. By distinguishing clearly between economic objectives and privacy objectives and moving away from consent-based models that fall short of both objectives, we can best protect consumers and their data, while still enabling companies to unlock the benefits of AI and machine learning for industry, society and consumers.

For a more detailed discussion, see our full essay, published at Future of Privacy Forum.