Bots are nothing new. They’ve been around for a while, and they’re certainly not going away. But as more of our social lives move online, the creep factor inches forward.
Take, for example, the latest spambots to hit Instagram.
To avoid spam filters, these automated software applications are essentially stealing legitimate users’ identities. Christian Mazza, a video director for The Verge, recently noticed one such copycat account, complete with his very own profile photo and caption. Something similar happened to Mazza’s girlfriend’s account, too.
What’s more, these bots are not trying to steal accounts or money. By copying personal images, they make fake accounts look authentic—accounts that are likely part of the black market for fake social media followers.
Well, The Verge explains, “Buyers may be minor celebrities; professional photographers who think having more followers will get them more business; social media consultants who want to look like they know what they’re talking about, or people just looking for an ego boost.” Hollow reasons, indeed, but creepy nonetheless.
Five years ago, Scientific American reported on a similar phenomenon with blogs. These so-called “splogs” let advertisers and other bad actors fill seemingly real blogs—content copied from legitimate ones—with advertisements and boost their own search rankings.
And anytime someone’s identity is taken or borrowed or simply mirrored, the potential harm increases.
The harm may be hard to quantify, but it’s hard to deny it’s real. Is it better or worse when it’s done by an automated program? Take, for example, auto-generated messages about getting married: Pinterest mistakenly sent thousands of emails congratulating users on their upcoming nuptials. The problem? Many of the recipients were not actually getting married.
Maybe you’ve known some people who are sensitive about that kind of thing?
A spokesman said the incident was not due to a technical mistake—so perhaps we won’t chalk this one up to poorly designed algorithms or bots but rather a poorly written email.
Here’s what Pinterest’s head of communications said: “Every week, we email collections of category-specific pins and boards to pinners we hope will be interested in them … Unfortunately, one of these recent emails suggested that pinners were actually getting married rather than just potentially interested in wedding-related content. We’re sorry we came off like an overbearing mother who’s always asking when you’ll settle down with a nice boy or girl.”
The humor is a nice touch, but Elahe Izadi of The Washington Post wrote, “Wait, I’m getting married? Have I been in a coma for the past five years? Pinterest, do you have access to a crystal ball or some sort of time-travel technology? And if so—not to sound judgmental—using it for planning nonexistent upcoming weddings seems like a major waste of future-seeing resources.”
Though the harm was ultimately limited, automated systems, however efficient, run the risk of amplifying the creep factor, corroding trust among loyal customers and causing simple corporate embarrassment.
But not all things bot-related are bad. There are good bots out there, too—not to mention helpful, funny and sometimes flat-out immature ones. For example, a collection of Twitter accounts tied to fault lines in California is designed to warn and inform followers of earthquakes.
— SF QuakeBot (@earthquakesSF) August 24, 2014
There’s also the @YesYoureRacist Twitter account, which automatically retweets posts that include the phrase, “I’m not racist, but …” Other bots have followed suit, including @YesYoureSexist and @YesYoureHomophobic.
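The core mechanic behind such a bot is simple enough to sketch. The bot’s actual code and its Twitter API plumbing aren’t public, so the trigger pattern and function name below are hypothetical—this is only a minimal illustration, in Python, of the phrase-matching step that would decide whether a tweet is a retweet candidate:

```python
import re

# Hypothetical sketch of the filtering step behind a bot like
# @YesYoureRacist: check whether a tweet's text contains the trigger
# phrase. (Pattern and function name are assumptions for illustration;
# this is not the bot's real code.)
TRIGGER = re.compile(r"\bi'?m not racist,? but\b", re.IGNORECASE)

def should_retweet(tweet_text: str) -> bool:
    """Return True when the tweet contains the trigger phrase."""
    return bool(TRIGGER.search(tweet_text))
```

In practice the bot would feed a live stream of tweets through a check like this and retweet the matches; the hard part is the API plumbing, not the matching itself.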
Many of us would laud bringing such unsavory communication to light, but I wonder whether these bots in some ways violate the privacy of Twitter users who might have only a few followers—users whose moronic (and sometimes hateful) comments would otherwise slip completely unnoticed into the abyss of the vast stream of tweets.
Of course, bots can also be funny. Any time someone tweets “over 9000,” this bot replies with:
@harrysneko WHAT?! NINE THOUSAND?!
— Nappa (@DBZNappa) August 13, 2014
Or juvenile, like @fart_robot.
And then there are times when two bots interact.
The Atlantic recently wrote about such an incident and what it means for the future of machine-to-machine communication. Two homemade bots—one representing a five-year-old girl, the other a fake account of film producer Keith Calder—started a dialogue, which, because of certain keywords, pulled Bank of America’s corporate Twitter bot into the fold.
Though there was no substance to the botversation, Alexis Madrigal pointed out that “each of us is going to have more and more bots acting on our behalf as well as trying to get our attention. What we see working on Twitter will soon spring from the computer and begin acting in the world.”
We’re already seeing such information flow control in our Facebook feeds—and soon, according to the latest reports, we’ll see similarly algorithmic flows in Twitter’s news feed (boooooo!)—and in Gmail’s Priority Inbox.
As the Internet of Things continues to take hold in our day-to-day lives, and as more and more machines automatically communicate with one another, we’re going to rely ever more heavily on those services. Madrigal calls it the “microbotome.”
I’m not sure if that term will catch on, but we’re going to be seeing more of these increasingly intelligent artificial intelligences. And as businesses continue to use automated services, privacy pros will have a role to play in making sure social norms and privacy expectations aren’t violated or ignored—either by the businesses using the services or by the bad actors trying to manipulate them.