In November 2020, California voters approved a new data privacy law. Unfortunately, the law contains a provision that may threaten the future of digital content for underrepresented communities. California’s new law, the California Privacy Rights Act, includes provisions that prohibit “revealing” a consumer’s racial or ethnic origin, religious or philosophical beliefs, and sex life or sexual orientation. The beneficial intent behind this provision is unassailable, but regulations need to be carefully tailored to allow digital content providers to offer free content to users interested in issues affecting people of color, LGBTQ communities and other groups. Otherwise, the very groups these protections are intended to benefit will lose access to content on issues of critical importance to them.
Sensitive personal information
The CPRA is poised to usher in a sea change in U.S. data privacy. Although the CPRA addresses the data privacy rights and obligations of California consumers and organizations conducting business in California, respectively, its broad scope will impact many U.S. and international businesses and industries. Further, although nonprofits are not directly subject to the CPRA, they will be impacted by this law, as well, to the extent they receive funding and have relations with businesses governed by the CPRA. This new law will also create the nation’s first agency dedicated to consumer data privacy and new data privacy rights that limit businesses’ ability to collect, use and share personal information.
For purposes of this article, the most important provision is that these new privacy rights include the protection of consumers’ “sensitive personal information,” which is data revealing a consumer’s racial or ethnic origin, religious or philosophical beliefs, and sex life or sexual orientation. Although “reveal” is presumably intended to address situations in which a consumer expressly discloses their identity and beliefs, it is anticipated that some consumer rights groups may argue that data merely indicating a consumer’s interest in certain demographic identities and ideologies should also be treated as sensitive personal information.
The CPRA is not alone in protecting “sensitive personal information,” like racial or ethnic origin. Data privacy laws in the European Union, South Africa, Kenya, Ghana, Brazil and Nigeria have each embraced the concept. The protection of such information is a noble goal and springs from a dark history when minority groups were persecuted on a mass scale for their identities and ideologies. However, this key ambiguity in the CPRA threatens to undermine the laudable goal of protecting sensitive personal information and may restrict access to digital content for the very groups the laws were meant to protect.
Unfortunately, the CPRA’s definition of “sensitive personal information” threatens the ability of marginalized groups to access digital content and the commercial viability of content regarding such groups. As explained below, because of this new provision, businesses will begin to self-censor, declining to provide online content related to identities and ideologies to avoid the legal implications of being a handler of sensitive personal information. Further, publishers will be unable to offer such content free online if information relating to a user’s interest in that content is treated as revealing the user’s identity and ideologies when it plainly does not.
This is an important issue because data privacy laws should encourage rather than hinder the advancement of social justice causes, which have evolved in the modern era to embrace openly discussing issues of identity and ideology. For example, the Black Lives Matter protests over the past year have demonstrated increased attention to the as-yet unaddressed issues of social justice in marginalized communities around the world. The movement, which has now gone global, brings to the fore the struggles of the African American community. Indeed, reading about the struggles of marginalized groups prompts greater understanding, empathy and, ideally, action to address these fraught issues. The digital revolution has democratized voices from marginalized communities, with even large media companies developing brands and verticals to highlight voices from underrepresented groups. Put simply, the world has changed from masking one’s identity and ideologies to a place where, especially online, voices can join together and create, view and share content that embraces their unique identities and perspectives.
To advance this goal, one key ambiguity must be resolved to ensure the CPRA actually works for the communities it was meant to serve, namely the meaning of the term “reveal.” For example, if a white male searches online for Latinx or African American narratives, online publishers may place this person in an interest segment for advertisements related to Latinx and African American issues. Under the CPRA, however, it is ambiguous whether this data actually reveals the user’s identity. While it seems clear that such searches do not reveal a protected identity, businesses are nonetheless understandably wary of collecting such identity- or ideology-based data about users’ interests. This is because, under the CPRA, businesses will have to disclose that they collect information that might “reveal” a consumer’s race, ethnicity, religion and sexual orientation, and the optics of such a disclosure may concern them, even though they may be collecting and sharing this information for positive reasons, such as advancing a social justice cause.
Further, even though businesses may collect and share sensitive personal information for reasons beneficial for underrepresented communities, they may make a financial decision to stop doing so to avoid creating new compliance obligations implicated by collecting and disclosing sensitive information under the CPRA. In fact, this concern is real and imminent, as several social media and technology companies have already taken these drastic steps.
Ads are at the heart of the economy for free-to-the-user digital content. Online advertising takes many forms, but first-party contextual advertising is central to any digital content provider. Users might recognize these kinds of ads along the lines of “if you like this content on our site, you might like this other content on our site.”
However, for contextual advertising to function, publishers need to collect data regarding users’ interests in certain topics based on usage patterns and place them in audience segments composed of users who would be interested in viewing other content and advertisements on the same topic. If a publisher cannot determine what topics users are actually interested in, the publisher cannot show that content on these topics is in fact attracting readers. If the content is not attracting readers, it is presumed to be non-revenue-generating.
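To make the mechanism concrete, here is a minimal sketch of first-party contextual segmentation. All names and the threshold are hypothetical illustrations, not drawn from any real publisher’s system: a user who reads several articles tagged with a topic is placed in an interest segment for that topic, based only on usage data the publisher itself collects.

```python
from collections import defaultdict

# Hypothetical sketch: assign users to interest segments based on how many
# articles they read on a topic (first-party usage data only). The threshold
# of 3 reads is an assumption for illustration.
SEGMENT_THRESHOLD = 3

class SegmentBuilder:
    def __init__(self, threshold=SEGMENT_THRESHOLD):
        self.threshold = threshold
        # view counts: user_id -> topic -> number of articles read
        self.views = defaultdict(lambda: defaultdict(int))

    def record_view(self, user_id, topic):
        """Log that a user read one article tagged with `topic`."""
        self.views[user_id][topic] += 1

    def segments_for(self, user_id):
        """Topics where the user's reading crosses the interest threshold.

        Note: the output reflects interest in a topic, not the user's
        own identity or beliefs.
        """
        return {t for t, n in self.views[user_id].items() if n >= self.threshold}

builder = SegmentBuilder()
for _ in range(3):
    builder.record_view("user-1", "social-justice")
builder.record_view("user-1", "sports")

print(builder.segments_for("user-1"))  # -> {'social-justice'}
```

The point of the sketch is that the only input is reading behavior on the publisher’s own site; nothing in the data records who the reader is, only what topics attract them.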
For example, a business may decide to work with a charity to promote a diversity initiative. As part of this initiative, the business may wish to place ads with an online publisher to collect donations for the charity and target the ads to an audience segment composed of individuals who have expressed interest in the charity’s target diversity causes. However, if online publishers decide to no longer maintain such audience segments because it may be considered a collection of sensitive personal information, the business promoting the diversity initiative will not be able to focus its ad campaign on the users most likely to be interested in the initiative.
The business likely will spend more money with the publisher for less user engagement, and ultimately, collect fewer donations for the charity.
In another example, an online publisher may decide to sell ad space on its website to pay for a diversity initiative it would like to launch. To attract advertisers who will buy the ad space, the ads will be targeted at users of its website who have previously shown interest in reading content on diversity topics, such as news stories about people of color (for example, users who have read multiple articles on these topics within a certain period of time). Without this direct correlation between user interest in certain topics and the ads, it is hard to make a convincing case to advertisers that more of this type of content will attract users who will, in turn, see the advertisers’ ads.
However, if tracking readers’ interests in issues related to identity or ideologies in this scenario is considered collecting sensitive personal information, then the publisher may decide to forgo launching this diversity initiative and instead sell the ad space for campaigns focusing on other topics. This is because the publisher’s use of the ad space for this initiative may require it to update its privacy notice to state that it is a collector of sensitive personal information and allow its users to limit the publisher’s use of their sensitive personal information. Further, although the publisher’s reason for collecting the users’ data in this way is socially beneficial because it supports the publisher’s creation of diversity-topic content, the publisher may self-censor to avoid the problematic optics of being a collector of sensitive personal information, which advertisers may view as a potential source of negative public relations.
In short, to avoid the potential risk of litigation and enforcement actions arising from the ambiguity of “reveal,” online publishers may avoid creating, selling or using audience segments composed of individuals interested in issues impacting people of color and other historically underrepresented groups. Businesses may also self-censor, declining to work with any publisher that collects, uses or shares any information pertaining to “sensitive personal information.”
Thus, the CPRA could stifle speech related to identity and ideologies and hinder the publication of content related to social justice issues.
If consumers decide to exercise their right to limit or opt out of businesses using and disclosing their sensitive personal information, then many publishers will likely become unable to offer digital content for free online, particularly content pertaining to issues of identity and diverse voices.
Consequently, economically disadvantaged communities will lose access to, among other things, valuable educational content regarding identity and ideologies, information regarding how they can get involved in social justice causes, and a medium to express their thoughts on these issues. Given the vast reach of digital content across the globe, it is also possible a shock to the current ecosystem of digital content production could disrupt platforms used to promote global development. For example, platforms might be reluctant to create audience segments composed of online users who have expressed interest in cultural goods from developing countries, to avoid “revealing” users’ racial, ethnic or national origin.
Further, changing the free-to-the-user digital model based on advertising revenue will likely not solve this problem. For example, an online subscription newsletter — which economically disadvantaged communities are less likely to use — cannot know from which readers to obtain opt-in consent to send content related to identity and ideologies if the collection of usage data related to such interests is less available because of concern that it may constitute sensitive personal information. Thus, even under this alternative model, authors will be unable to reach individuals interested in social justice issues with pertinent content.
To mitigate these concerns, regulations should be promulgated clarifying that businesses’ obligations regarding the collection, use and sharing of “sensitive personal information” are triggered only if consumers explicitly disclose their identity or ideologies, such as via survey questions. If businesses merely collect consumers’ usage data on their own website (i.e., first-party contextual data) related to the users’ interest in issues related to identity and ideologies to create audience segments, they are not collecting, using or disclosing sensitive personal information under the CPRA.
Again, merely expressing an interest in topics related to race, ethnicity, religion, philosophical beliefs, gender and/or sexual orientation does not mean that the user is a member of that group or shares similar ideologies.
Simply put, a user’s first-party contextual data does not reveal characteristics of the user’s identity or ideologies; rather, it indicates the user’s interest in those topics. By adopting regulations clarifying this issue, online publishers can continue to offer online content related to identity, ideologies and social justice issues to users for free by collecting and monetizing data related to an individual user’s interests in such topics, which does not reveal the user’s actual identity or ideology.
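The distinction the proposed regulations would draw can be sketched as a simple classification rule. Everything here is hypothetical and illustrative (the source labels and the function are assumptions, not part of the CPRA): data from an explicit disclosure, such as a survey answer, would be sensitive personal information, while a first-party interest signal, such as a page view, would not.

```python
# Hypothetical sketch of the distinction the proposed regulations would draw:
# data is "sensitive" only when the consumer explicitly discloses identity or
# beliefs (e.g., a survey answer), not when it merely records topical interest.
EXPLICIT_SOURCES = {"survey", "profile_field"}      # assumed source labels
INTEREST_SOURCES = {"page_view", "click", "search"}  # first-party usage signals

def is_sensitive(record):
    """Classify a data record under the proposed reading of 'reveal'."""
    if record["source"] in EXPLICIT_SOURCES:
        return True   # consumer expressly disclosed identity or beliefs
    if record["source"] in INTEREST_SOURCES:
        return False  # interest signal, not an identity claim
    return True       # unknown provenance: treat conservatively as sensitive

survey = {"source": "survey", "value": "self-identified ethnicity"}
view = {"source": "page_view", "value": "read article on LGBTQ rights"}
print(is_sensitive(survey), is_sensitive(view))  # -> True False
```

Treating unknown provenance as sensitive mirrors the conservative posture a compliance team would likely take while the ambiguity remains unresolved.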
Without new clarifying regulations aimed at resolving this ambiguity, the free-to-the-user system of digital content is threatened.
With special thanks to Ashley Tan for her contribution to this article.