
Privacy Perspectives | Hillary Clinton, Google, and algorithmic transparency


If you’re an avid fan of House of Cards, you know this past season delved into some privacy issues – my colleague Courtney wrote about them here. If you're not, I don't think it spoils anything to say one such privacy issue concerned a presidential candidate’s close ties to the world’s leading search engine. Of course, real business names were not used, but we all know the fictional company was based on Google. Essentially, the fictional search engine, Pollyhop, manipulated search results in favor of said candidate over his rival, sitting President Frank Underwood. 

Well, sometimes life imitates art.

On June 9, SourceFed posted a YouTube video claiming that Google may be manipulating search results in favor of presumptive Democratic presidential nominee Hillary Clinton. “If true,” SourceFed claimed, “this would mean that Google Searches aren’t objectively reflecting what the majority of Internet searches are actually looking for, possibly violating Google’s algorithm.”

See for yourself:

SourceFed alleges that Google’s autocomplete, when searching “Hillary Clinton cr,” does not bring up “criminal charges” the way it does on rival search engines Bing and Yahoo. Similar searches for “Bernie Sanders so” bring up “socialist” on all three search engines, and “Donald Trump ra” brings up “racist.”
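If you’re curious, claims like these are easy to spot-check programmatically. Below is a minimal sketch that queries the unofficial suggestion endpoints that have long powered browser search boxes for Google and Bing; treat both URLs as assumptions on my part – they’re undocumented, may change or be rate-limited, and return results shaped by your location and session, which is exactly the point.

```python
# Minimal sketch: compare autocomplete suggestions across engines.
# The endpoints below are unofficial and undocumented (an assumption);
# they may change or disappear at any time.
import json
import urllib.parse
import urllib.request

ENDPOINTS = {
    "google": "https://www.google.com/complete/search?client=firefox&q={q}",
    "bing": "https://api.bing.com/osjson.aspx?query={q}",
}

def suggestions(engine: str, prefix: str) -> list[str]:
    """Fetch autocomplete suggestions for a query prefix from one engine."""
    url = ENDPOINTS[engine].format(q=urllib.parse.quote(prefix))
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    # Both endpoints return OpenSearch-style JSON: [query, [suggestion, ...]]
    return json.loads(body)[1]

if __name__ == "__main__":
    for prefix in ("hillary clinton cr", "bernie sanders so", "donald trump ra"):
        for engine in ENDPOINTS:
            print(f"{engine:7} {prefix!r} -> {suggestions(engine, prefix)}")
```

Run it twice from two different networks and you will likely see different lists, which is why a single screenshot of one person’s autocomplete proves very little.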

So is this true? Could this mean that the search giant is quietly trying to affect the outcome of a U.S. presidential election?

Probably not. 

In a post on Medium, Rhea Drysdale argues that SourceFed’s methodology is shoddy – they’re “just choosing random words and searching on them” – and that the story is built to generate lots of page views and, thus, ad revenue. Based on this tweet, she’s probably right:

Other journalists pointed out SourceFed's, um, you know, lack of journalism:

Some major news sites took up SourceFed's report and brought it mainstream:

One Google employee, Matt Cutts, posted several tweets detailing why SourceFed's claims were simply false, beginning with this tweet: 

But regardless of who you believe, or what candidate you prefer, for that matter, the debate highlights how difficult it is to make sense of algorithms, especially on sites that are based on personalization.

For Google Search, the most popular searches are not the only thing that counts toward the autocomplete suggestions a user sees; other factors include the location associated with the user’s IP address and the user’s previous searches (Search Engine Land featured a good post on how the feature works). On top of that, posts like SourceFed’s will themselves affect the results, because readers of the story will run similar searches they likely wouldn’t have run otherwise – a feedback loop Drysdale also rightly points out.
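To make that concrete, here is a toy model of a personalized autocomplete ranker. Every signal and weight below is invented for illustration; Google’s actual system is proprietary and far more complex.

```python
# Toy illustration of why two users can see different autocomplete
# suggestions for the same prefix. Signals and weights are invented;
# this is not Google's algorithm.
from dataclasses import dataclass, field

@dataclass
class UserContext:
    region: str                                   # coarse location from IP address
    past_queries: set[str] = field(default_factory=set)

def score(candidate: str, global_pop: dict[str, float],
          regional_pop: dict[str, dict[str, float]], user: UserContext) -> float:
    """Blend invented signals into a single suggestion score."""
    s = global_pop.get(candidate, 0.0)                           # everyone's searches
    s += regional_pop.get(user.region, {}).get(candidate, 0.0)   # local trends
    if candidate in user.past_queries:                           # personal history
        s += 1.0
    return s

def complete(prefix: str, candidates: list[str], global_pop, regional_pop,
             user: UserContext, k: int = 4) -> list[str]:
    """Return the top-k candidates matching the prefix for this user."""
    matches = [c for c in candidates if c.startswith(prefix)]
    matches.sort(key=lambda c: score(c, global_pop, regional_pop, user), reverse=True)
    return matches[:k]
```

Give two users different regions or search histories and the same prefix will rank the candidates differently – no manipulation required.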

Likewise, on other personalized social media platforms – Facebook, say – what users see in their News Feed is based on their usage, their friends network, their likes, and countless other factors. There’s no easy way for me to tell whether certain posts are given more priority than others in my feed. I personally rely on News Feed less these days because, no matter how often I tell it to present the feed chronologically, it eventually resets back to “Top Stories.”
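The “Top Stories” versus chronological distinction boils down to sorting the same posts by different keys. Here is a deliberately simplistic sketch; the engagement score is invented, and Facebook’s real ranking model is not public.

```python
# Toy contrast between a chronological feed and a ranked ("Top Stories")
# feed. The engagement score is invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float        # seconds since epoch
    likes: int
    from_close_friend: bool

def chronological(feed: list[Post]) -> list[Post]:
    """Newest first: the ordering I keep asking for."""
    return sorted(feed, key=lambda p: p.timestamp, reverse=True)

def top_stories(feed: list[Post]) -> list[Post]:
    """Ranked by an invented engagement score: the ordering I keep getting."""
    return sorted(feed, key=lambda p: p.likes + (10 if p.from_close_friend else 0),
                  reverse=True)
```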

Users see this kind of personalized service on countless other sites – from Amazon to Yahoo. Many of these services are great: I love that Amazon, for example, suggests books based on other books I’ve purchased. But not knowing how certain services work – which criteria are built into the algorithms – can, at one end, cause public confusion, like the Clinton search debacle, or, more troubling, can affect life-altering decisions about individuals.
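The general idea behind “customers who bought X also bought Y” is item-to-item recommendation from co-purchase data. Here’s a minimal sketch of that idea; it illustrates the concept, not Amazon’s production system.

```python
# Minimal sketch of item-to-item recommendation from co-purchase counts.
# An illustration of the concept, not Amazon's production algorithm.
from collections import Counter
from itertools import combinations

def co_purchase_counts(orders: list[set[str]]) -> Counter:
    """Count how often each pair of items appears in the same order."""
    pairs = Counter()
    for order in orders:
        for a, b in combinations(sorted(order), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(item: str, pairs: Counter, k: int = 3) -> list[str]:
    """Items most frequently bought together with `item`."""
    related = Counter()
    for (a, b), n in pairs.items():
        if a == item:
            related[b] += n
        elif b == item:
            related[a] += n
    return [other for other, _ in related.most_common(k)]

orders = [{"book_a", "book_b"}, {"book_a", "book_c"}, {"book_a", "book_b", "book_d"}]
print(recommend("book_a", co_purchase_counts(orders)))
# -> ['book_b', 'book_c', 'book_d']
```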

Scholars Danielle Citron and Frank Pasquale have long advocated for more algorithmic transparency and due process. Similarly, World Privacy Forum’s Pam Dixon and Robert Gellman issued an in-depth report called “The Scoring of America,” which includes accounts of disturbing lists that people are put on, including “cancer sufferers” and “alcoholics.” The average person has no idea they could be included on such lists or how they might get there. Congress has taken note, as has the Federal Trade Commission, and hopefully eligibility issues arising out of algorithms will continue to receive scrutiny from U.S. lawmakers and regulators.

What the Clinton search incident brings up, however, is the value of having some modicum of algorithmic transparency about day-to-day services. Just look at the confusion this one story brought forth. Upon seeing SourceFed’s video on Hillary Clinton, it’s not a leap to assume that some voters might actually change their minds about who they’re voting for, or decide not to vote at all; in a closely contested and highly charged presidential election, that could determine the so-called “leader of the Free World.”

Should companies provide a transparency notice about how their algorithms are being used? Though it might disclose proprietary or business information companies do not want revealed, it could also better inform the media and the public. True, some algorithms are so complex that not even their developers fully know how they work, but disclosures could describe at a high level how an algorithm works without giving away the secret sauce.
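What might such a high-level disclosure look like? Here is one hypothetical shape for it, using autocomplete as the example; every field and value is invented for illustration.

```python
# A hypothetical, high-level algorithmic disclosure for an autocomplete
# feature. Every field and value here is invented for illustration;
# no company publishes this exact format.
AUTOCOMPLETE_DISCLOSURE = {
    "service": "search autocomplete",
    "purpose": "suggest likely completions as you type",
    "signals_used": [
        "aggregate query popularity",
        "coarse location inferred from your IP address",
        "your own recent searches",
    ],
    "signals_not_used": [
        "manual promotion or demotion of political candidates",
    ],
    "personalized": True,
    "learn_more": "https://example.com/how-autocomplete-works",  # placeholder URL
}
```

Nothing in a notice like this reveals model weights or code, but it would have answered the exact question the SourceFed story raised.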

Doing so could improve users’ trust in a site or service and help prevent inaccurate reports like SourceFed’s, along with the fallout from that misinformation.

Top image is a screenshot from SourceFed’s YouTube video.
