In the constant churn of national and international news, it’s easy for important items to get overlooked — even when the story helps explain how we find the very information we’re seeing. Almost lost in the news cycle Sept. 23 was a report detailing the extraordinary extent to which Facebook is amplifying false and misleading content relating to the 2020 U.S. presidential election. This time, the disinformation isn’t coming from abroad: U.S.-based super PACs — the organizations responsible for so much political advertising — are the ones flooding Facebook users’ feeds with false and misleading political ads.

This latest news further underscores the findings of academic and think-tank researchers, government agencies, bipartisan investigations, and even Facebook’s own internal reviews and oversight board about the toxic effects of social media on politics. 

Taken together, they lead to an inescapable conclusion: Facebook’s reach and detailed user profiles are being exploited in harmful and sometimes anti-democratic ways.

With more than 2 billion users worldwide, Facebook has unprecedented reach, and that reach has created dominance in a stunning array of contexts: small businesses that rely on Facebook to find customers; religious congregations that livestream services during the pandemic; content creators who use the platform to gain visibility for their work; game developers who use it to attract players; news outlets whose content is shared on the platform; and, yes, people who connect over cat videos, birthday greetings, and pictures of family and friends. Facebook’s market dominance in social media is undeniable, and its aggressive acquisition of potentially competing technologies — from virtual reality headset maker Oculus to messaging app WhatsApp, photo-centric platform Instagram, and analytics tools, like CrowdTangle, that researchers use to study user activity trends on the platform — has put it under the microscope of antitrust regulators in the U.S. and Europe.

In the aftermath of the Russian government’s interference in the 2016 U.S. election, a great deal of energy was focused on understanding the ways social media can be used to amplify propaganda and misleading content. The conclusions were undeniable: from the Mueller report to the Senate Intelligence Committee’s report, from the U.K. House of Commons Digital, Culture, Media and Sport Committee report to the Australian Competition and Consumer Commission’s digital platforms inquiry report, from the work of researchers at the Oxford Internet Institute to reports from private companies, like Graphika, the fundamental findings have been reinforced time and again: Facebook’s platform is leveraged in countless ways, and to great effect, to spread political messages, including propaganda, misinformation and messaging from “inauthentic” accounts (i.e., imposter accounts whose content is posted by people or organizations pretending to be something they’re not).

No matter how many times these findings have been replicated across studies, the impact remains shocking and is perhaps best summarized in a single statistic: Facebook’s own research concluded that 64% of people who joined extremist groups on the platform did so because Facebook’s algorithms recommended those groups to them. The company’s former executives have testified that they designed the platform to be addictive, maximizing Facebook’s profits by maximizing user engagement.

Extensive research and reporting suggest that Facebook’s 2017 shift, retooling its algorithms to prioritize content in and from groups, hasn’t helped; if anything, these algorithmic changes may have accelerated the spread of extremist content on the platform. The result: Online extremism is spilling off the screen and into the streets, as QAnon believers run for office and Boogaloo Bois show up with guns at protests.

Despite the widespread impact of social media, the intersection between the privacy of personal data and these platforms’ effects remains frequently misunderstood.

I was recently part of an online law faculty forum discussing content moderation — the mechanisms platforms use to decide whether to remove posts that violate their terms of service. Several participants repeatedly suggested that social media platforms should be treated, under the law, the same way as webmail providers: After all, they said, isn’t what shows up in your social media feed just like what a friend emails you? The answer, of course, is a resounding “no.”

Platforms like Facebook harness the extraordinary power of detailed personal profiles to gauge what content will entice a particular user, then calibrate that user’s feed accordingly to maximize the amount of time the user stays online. The result is that what appears in our feeds is informed to some degree by what our friends post, but shaped to a much larger degree by the platform algorithms’ automated selection of content to prioritize, based on their assessment of who we are and what will keep us engaged online — which usually means fearful or enraged.
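
To make the contrast with webmail concrete, here is a minimal, purely illustrative sketch (not Facebook’s actual system; every name, signal and data structure in it is hypothetical) of the difference between a chronological inbox and a profile-driven, engagement-ranked feed:

```python
# Purely illustrative sketch -- not Facebook's actual code. All field
# names, signals and weights here are hypothetical, for contrast only.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    timestamp: int  # posting time, e.g., Unix seconds

def chronological_inbox(posts: list[Post]) -> list[Post]:
    """Webmail model: show everything your contacts sent, newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts: list[Post], profile: dict, top_n: int = 10) -> list[Post]:
    """Platform model: score each post by predicted engagement for this
    particular user, then surface only the highest-scoring items."""
    def predicted_engagement(post: Post) -> float:
        # Hypothetical profile signals inferred from the user's history.
        topic_score = profile.get("topic_affinity", {}).get(post.topic, 0.0)
        author_score = profile.get("author_affinity", {}).get(post.author, 0.0)
        return topic_score + author_score
    return sorted(posts, key=predicted_engagement, reverse=True)[:top_n]
```

The crucial detail is the second function’s dependence on the profile: in the inbox model, your contacts determine what you see; in the ranked-feed model, the platform’s dossier about you does.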

This point is critical to understanding the intersection between data privacy, personal autonomy and democracy. 

There are multiple paths to pursue in rebalancing the relationship between social media platforms, individuals and democracy, and all of them are important: Congress is considering data privacy legislation; federal, state and international regulators are undertaking consumer protection and antitrust reviews; courts are weighing what constitutes an “actual injury” for purposes of standing to sue; individuals are learning to question the veracity of the information they encounter online; and societies are assessing how to empower their residents — especially older people, who studies show are most susceptible to disinformation — with the critical thinking skills they need to sort through the slew of online content. In all of these efforts, understanding how the technology works, and how personal information lies at the crux of the problem, is essential to moving toward workable solutions that properly balance the many valid and sometimes competing interests at stake: freedom of speech, consumer choice, business opportunity, technology innovation and more.

As all these efforts to understand and manage the impact of social media platforms unfold, IAPP members are particularly well suited to be at the forefront of informed, constructive conversations on how to address these challenges: how to preserve the most beneficial aspects of social media while protecting individuals and society at large from the platforms’ most harmful impacts.

Privacy professionals — lawyers, privacy officers, compliance officers, privacy engineers, privacy technology program managers — frequently have a better understanding than anyone else in the room of how the technology really works (as witnessed by that online discussion with law faculty who didn’t fully grasp the difference between social media and webmail). We’re often asked to help guide privacy design; we’re frequently called on to address privacy concerns identified after the fact; and we regularly get the opportunity to help calibrate an organization’s moral compass: to help key decision-makers ensure that data ethics are weighed alongside business motivation, technological feasibility, legal permissibility and the other factors that shape everyday decisions about data-driven technologies.

In that context, the extensive public reports, research and investigations into Facebook provide countless opportunities for all of us to assess the decisions we’re making, the guidance we’re offering, and the values that inform both in our everyday work.

Returning to the immediate question: The latest news about Facebook’s spread of false and misleading political ads brings us back to the old Silicon Valley adage: If you’re not paying for the product, you are the product. 

Now, it seems, it’s democracy that’s going to the highest bidder.

April Falcon Doss chairs the cybersecurity and privacy practice of law firm Saul Ewing Arnstein & Lehr and served as the senior minority counsel for the Russia investigation in the U.S. Senate Select Committee on Intelligence. She’s the author of “Cyber Privacy: Who Has Your Data and Why You Should Care,” available in bookstores and online in October 2020.
