Last month in San Francisco, tech leaders and CEOs met with federal regulators to do something rather unexpected: facilitate the use of new technology in the democratic process. Along the way, participants gave serious thought to how to build safeguards that ensure the protection of consumer privacy.
Inspiration for the event came from conversations with Federal Election Commission (FEC) Chair Ann Ravel, in which she shared with me her fervent belief that new technology has the potential to bring more people into the political system at a time when voter participation rates are in decline. She also suggested that many policymakers didn't quite grasp the vast potential for technology, politics and the law to join together in this space. Further, we realized that these new technologies, and the growing use of big data analytics, necessarily raise concerns about consumer privacy protection.
Such was the genesis of “The Future of Technology & Democracy,” a conference where technologists—some from Google, Facebook and Twitter—gathered to educate and inform representatives of the FEC and the Federal Communications Commission (FCC). The all-day event touched upon new personalized targeting models in campaign advertising and fundraising, the potential for blockchain technology to facilitate real-time contribution transparency and the reinvention of a public square that has long left behind the confines of traditional media for the social platforms of online service providers.
Significantly, a common thread throughout the wide-ranging discussion was privacy: privacy for voters, privacy in civic discourse and privacy in the use of big data analytics, predictive modeling and targeting.
With that in mind, here are a few takeaways for privacy professionals from this groundbreaking event:
- Public Datasets Are Becoming More "Public" and Less "Private"
Government-released datasets are already “public” in terms of binary availability, but to the extent that technology enables ever-increasing ease of accessibility to that data, public datasets provided by the government are arguably becoming more “public” and less “private” for the average citizen. Phrased differently, for better or for worse, government-released datasets promote a more transparent citizenry.
Public datasets are routinely data mined and combined with purchased and proprietary datasets, e.g., consumer data and details provided to Facebook and other social media. The resulting combined datasets enable extremely narrow, and even individual, targeting that political campaigns and others can leverage effectively.
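To make the mechanics concrete, here is a minimal sketch of the kind of dataset enrichment described above. All records, field names and the shared identifier are hypothetical, purely for illustration:

```python
# Hypothetical public dataset, e.g. rows from a state voter file.
voter_file = [
    {"voter_id": 1, "zip": "97201", "party": "D"},
    {"voter_id": 2, "zip": "99501", "party": "R"},
    {"voter_id": 3, "zip": "33101", "party": "I"},
]

# Hypothetical proprietary consumer data, keyed on the same identifier.
consumer_data = {
    1: {"grocery_segment": "organic", "likely_donor": True},
    2: {"grocery_segment": "bulk", "likely_donor": False},
    3: {"grocery_segment": "convenience", "likely_donor": True},
}

# Joining the two produces a profile far narrower than either source alone.
enriched = [{**row, **consumer_data.get(row["voter_id"], {})} for row in voter_file]

# A campaign can now filter down to a very specific slice of voters.
likely_donors = [r for r in enriched if r.get("likely_donor")]
```

The privacy concern is visible even in this toy example: neither dataset is especially sensitive in isolation, but the join yields an individual-level political profile.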
- Predictive Behavioral Targeting Works (for Campaigns and Campaign Fundraising)
Using tech to sway the public is no longer the stuff of science fiction.
In 2014, CampaignGrid, a consultancy that worked on marijuana legalization efforts, built a national database that scored every registered voter on their likelihood to support marijuana legalization. CampaignGrid's president, Jordan Lieberman, noted that this allowed not only differential targeting to persuade the undecided but also "get out the vote" advertising to segments that were interested but less likely to vote. Voters with a high opposition score, meanwhile, heard nothing. CampaignGrid's proprietary database, which pulled in and processed personal information from numerous third-party databases, proved quite successful, contributing to marijuana legalization in Oregon and Alaska and a close race in Florida.
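The segmentation logic described above can be sketched in a few lines. The thresholds and segment names here are hypothetical, not CampaignGrid's actual model, but they capture the pattern: persuade the undecided, mobilize interested-but-unlikely voters, and show nothing to likely opponents:

```python
def assign_segment(support_score: float, turnout_score: float) -> str:
    """Map a voter's model scores (0.0-1.0) to an ad segment.

    Thresholds are illustrative placeholders, not real campaign values.
    """
    if support_score < 0.3:
        return "no_contact"        # high opposition score: hears nothing
    if support_score < 0.6:
        return "persuasion"        # undecided: differential persuasion ads
    if turnout_score < 0.5:
        return "get_out_the_vote"  # interested but less likely to vote
    return "no_contact"            # reliable supporter and voter: no ad spend
```

A voter scored as supportive but unlikely to turn out, e.g. `assign_segment(0.8, 0.2)`, lands in the get-out-the-vote bucket, which is exactly the differential treatment that makes this technique effective and, from a privacy standpoint, worth scrutinizing.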
For the most part, privacy professionals realize that databases compiling troves of personal information exist and will continue to increase in number and complexity. Yet for citizens and consumers, the prospect of individual political targeting is deeply concerning, because the traditional notice-and-consent quid pro quo, in which consumers trade information for (discounted) services, no longer applies.
Fortunately, institutional policy can be used as a lever to help protect privacy. For instance, Facebook’s Chase Mohney explained that Facebook requires that advertising target groups be a minimum of 1,000 to 10,000 persons, depending on various factors.
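A minimum-audience-size rule like the one Mohney described can be expressed as a simple policy check. The function and floor value below are hypothetical illustrations of the idea, not Facebook's actual implementation:

```python
MIN_AUDIENCE = 1000  # illustrative floor; the actual minimum varies by factors


def audience_allowed(matched_audience_size: int, floor: int = MIN_AUDIENCE) -> bool:
    """Reject targeting specs whose matched audience falls below the floor.

    By refusing to serve ads to very small groups, the platform prevents
    advertisers from singling out individuals or near-individuals.
    """
    return matched_audience_size >= floor
```

The privacy value of such a floor is that it converts individual targeting back into group targeting: an advertiser can reach "people like you" but not provably *you*.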
- The Theory of Creepy Is Alive and Well
Nate Thames, executive director of ActBlue, told a story that illustrated how privacy concerns and the theory of creepy remain alive and well. His organization tried to reduce micro-donation friction by pre-filling ZIP code information based on IP address. The unexpected result? A substantial drop-off in donations. Users were simply weirded out.
And that’s just the tip of the iceberg.
In a 2012 poll, 86 percent of respondents didn’t want political campaigns to tailor advertisements to them. And once people discovered that campaigns were already doing so, large majorities said they were less inclined to vote for those politicians, even when they previously supported them.
Facebook likes, online and retail purchases, consumer reports, grocery preferences, toilet paper brand, books purchased and even vacations taken all factor into the sophisticated data mining that has become de rigueur for modern campaigns.
All of this data, all of this personal information, is largely unregulated. That’s one spot where privacy pros can step in.
- Privacy Professionals Are Needed to Safeguard Privacy and Trust
In the consumer space, privacy professionals safeguard user privacy through best practices that often go above and beyond the floor set by law and regulation. Privacy best practices are crucial to building and maintaining that elusive competitive advantage known simply as trust.
There is vast potential for technology, politics and the law to innovate upon the current political process, but there is also considerable risk to consumer and voter privacy. Privacy pros must become intimately aware of current campaign advertising, communications and targeting paradigms, build internal consensus and take action to safeguard privacy and create trust. Only then will innovation in the political space become a real possibility.