

The IAPP recently reviewed a set of proposals from U.S. lawmakers for a new piece of federal privacy legislation, as well as comments submitted to the National Telecommunications and Information Administration in response to its proposed framework to protect data privacy. We did this to identify areas of consensus, as well as controversy, regarding what a U.S. federal privacy law would look like. In particular, we assessed levels of support for and opposition to various provisions that may be included in a federal privacy law, such as the right to access/correct/delete data, which was supported in the majority of the comments to the NTIA, and the right to data portability, which was supported in a minority of them. (Readers can access our research on the sidebar callout on this page.)

But that piece left a key question unanswered: Which of these provisions, if enacted, would be the most effective at enhancing privacy and data protection?

For privacy professionals and others interested in such fundamental questions, several of the submitted comments are worth examining in more detail. This article thus takes a deeper dive into some of the comments submitted to the NTIA not only to reveal the challenges involved in creating good privacy and data protection laws, but also to provide insight into what steps can be taken to make privacy programs more meaningful.

The specific comments examined here include those submitted by the Center on Privacy & Technology at Georgetown Law (CPT), the Electronic Frontier Foundation (EFF), the Center for Democracy & Technology (CDT), the Information Accountability Foundation (IAF), and a group of privacy law scholars affiliated with various universities. The key issues discussed are the inadequacy of notice and consent and the need to account for newly emerging privacy harms, such as discriminatory and manipulative uses of data and harms brought about by mass surveillance.

The inadequacy of notice and consent

Throughout the comments submitted to the NTIA, notice and consent were a primary issue of discussion. While notice and consent have formed the backbone of individuals’ rights and control over their data since the advent of the “information society,” they are becoming increasingly outmoded by innovation and expansion in data collection technologies. As the CPT put it, “[c]onsent today is less meaningful than it once was.” Although it remains “important in conjunction with other measures,” as the privacy law scholars argued, consent is “insufficient on its own to protect privacy.”

The reason for this is that some of the central assumptions of the notice-and-consent regime have been uprooted by advancements in technology. Indeed, it is becoming less realistic for an individual to keep themselves informed about the myriad entities that exert some form of control over their personal data. Today, the complexity of business processes and vendor relationships “outstrips the individual’s ability to make decisions to control the use and sharing of information through active choice,” as the IAF stated.

In 2008, researchers Aleecia McDonald and Lorrie Faith Cranor, CIPT, showed that a legal regime that obliges individuals to read privacy policies imposes significant costs on society. They estimated that reading the privacy policy of every website a person visits would take the average person about 76 workdays per year. This estimate did not include the additional time a person would then need to actually put that information to use (i.e., by comparing policies and deciding which websites to visit or avoid).
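To see how an estimate of this kind compounds, here is a back-of-envelope sketch. The inputs below (sites encountered per day, minutes per policy) are illustrative assumptions chosen for the example, not the study’s actual parameters:

```python
# Back-of-envelope cost of reading every privacy policy encountered in a year.
# All inputs are illustrative assumptions, not figures from the 2008 study.
policies_per_year = 4 * 365    # assume ~4 newly encountered sites per day
minutes_per_policy = 25        # assume ~25 minutes to read one policy
workday_hours = 8              # standard eight-hour workday

total_hours = policies_per_year * minutes_per_policy / 60
workdays = total_hours / workday_hours
print(f"{total_hours:.0f} hours/year, or about {workdays:.0f} eight-hour workdays")
# → 608 hours/year, or about 76 eight-hour workdays
```

The point is not the particular inputs but how quickly modest per-policy costs aggregate: under almost any plausible assumptions, the total reading burden lands far beyond what a consumer will realistically bear.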

As studies like these indicate, consumers rarely have an adequate line of sight into the nature of the agreements they are consenting to when they provide their personal data to companies. Operating with the expectation that consumers know how their personal data is being used thus assumes consumers are uniformly and unequivocally capable and willing to shoulder significant burdens.

Moreover, now that so much social and professional activity has shifted to digital platforms, an individual’s choice to share data with such providers tends to feel like anything but a free choice. Because personal data holds both actual and potential value, companies, in reality, have incentives not to present their consent agreements to consumers in neutral terms. As the EFF has noted, “companies are becoming skilled at manipulating consent, steering users to share personal data.”

An additional challenge is that new technologies are emerging today that are eroding the very foundation of the notice-and-consent regime. As the IAF points out, “consent is not effective in business processes such as AI where there is no human involvement.”

The privacy law scholars analyzed several ways in which consent is meaningfully incorporated into existing regimes, pointing to the EU General Data Protection Regulation and the California Consumer Privacy Act as examples worth emulating. Under these laws, consent is “clearly and affirmatively obtained, often in writing” and “informed,” offering consumers “a genuine choice” in that it is not, for example, “bundled” with the acceptance of terms and conditions. These laws also allow consumers to withdraw consent as easily as it is given, prohibit companies from “penalizing consumers for withholding or withdrawing” it, require consumers to opt in (or at least to be able to opt out easily), and mandate that consumers consent to changes in data practices.

The need to comprehensively assess privacy harms

Another key issue raised in several of the comments submitted to the NTIA is the need for lawmakers and regulators to define privacy harms more broadly. As the IAF put it, “Additional work is needed to more clearly define and describe harm as it can result from violation of privacy and inappropriate use of data.”

The reason for these calls to rethink privacy harms is simply that privacy harms today are not what they were in the past. With each new platform, technology and innovation come greater risks to individuals and consumers. Finding ways for laws and regulations to incentivize businesses to consider newly emerging risks to individuals — as well as to society as a whole — is thus of increasing importance.

For example, as the CPT also has pointed out, digital platforms have created new ways for technology to disseminate propaganda, misinformation and disinformation, amplify hate speech, deepen political polarization, and damage public health. Social media use, for instance, has been associated with depressive symptoms, particularly in adolescents. (It is worth noting, however, that findings on “Facebook depression” are mixed.)

Discrimination is another harm from data (mis)use that is becoming more acute today. The CPT argued that discriminatory uses of data “simply should not be allowed” and urged the NTIA to include “non-discrimination” explicitly on its list of desired privacy outcomes. The CDT similarly urged that the NTIA account for the “quickly growing risk” from “opaque and discriminatory algorithms.”

The Federal Trade Commission has similarly noted that companies’ reliance on big data raises fairness concerns. As it stated in its report, “Big Data: A Tool for Inclusion or Exclusion?” companies should “assess the factors that go into an analytics model and balance the predictive value of the model with fairness considerations.” The FTC also expressed its intention “to continue to examine and raise awareness about big data practices that could have a detrimental impact on low-income and underserved populations.” In this vein, the CDT pointed approvingly to the proposal from Sen. Ron Wyden, D-Ore., that would “direct the FTC to consider privacy injuries that involve ‘noneconomic impacts.’”

Mass surveillance is another source of harm that has increased the privacy risks to consumers by orders of magnitude. As the privacy law scholars explain, “[s]urveillance creates power imbalances, not just because knowledge is power, but because information can be used to nudge consumers toward particular behaviors without their knowledge.” This points to the harm of manipulation that individuals may suffer when certain types of information about them are exposed. As the IAF also urges, “manipulation of needs or desires/wants of the individual (i.e., creation of a need where one previously did not exist)” is a type of harm lawmakers should consider when crafting privacy and data protection laws.


As proposals for a federal privacy law continue to be introduced, it remains important to consider what provisions will bring meaningful privacy protection to consumers.

The set of comments submitted to the NTIA examined here deserves special attention from privacy professionals. While the issues they raise resist easy solutions, privacy professionals must fully understand emerging policy concerns about consumers’ privacy rights, particularly in an age when innovative data uses dominate headlines and bookshelves and privacy expands beyond regulatory compliance to core issues of brand, reputation and consumer trust. For the foreseeable future, the insufficiency of notice-and-consent mechanisms and the need to assess privacy harms from a broad perspective will continue to illuminate discussions of privacy law at both the federal and international levels.

These discussions will continue to command the attention of privacy pros, whose day-to-day jobs involve exploring complicated questions and thinking through the practical implications of these laws.


IAPP Westin Research

Consensus and Controversy in the Debate Over US Federal Data Privacy Legislation

In this IAPP Westin Research white paper, which is free to IAPP members, Fazlioglu examines a set of recent bills introduced in Congress, including the Consumer Data Protection Act introduced by Senator Ron Wyden, D-Ore.; the Data Breach Prevention and Compensation Act of 2018 introduced by Senator Elizabeth Warren, D-Mass.; and the Data Care Act of 2018, proposed by Senator Brian Schatz, D-Hawaii, in early December 2018.

Further, she examines a selection of recommendations made in comments submitted to the National Telecommunications and Information Administration from across government, industry, and advocacy organizations in response to a set of desired privacy outcomes. 

Lastly, she identifies several areas of broad agreement as well as pointed disagreement regarding the nature, shape, and scope of a potential U.S. federal data privacy law.

