
Privacy Perspectives | Putting Google Glass on Ann Landers


5-4. No, that’s not the vote count in a partisan Supreme Court decision or the score of a Major League Baseball Spring Training game.

It’s the number of “Dos” versus “Don’ts” on Google’s recently published blog post on the public implementation of the always-controversial Google Glass.

If you’re a Glass user, you might want to read it so you don’t get attacked:

Sarah Slocum, a contributor at Newsdab, arrived at Molotov's, which Google Maps describes as a "laid-back bar with daily happy hour," early Saturday morning, around 1:45 a.m., according to her YouTube page. Shortly after entering the bar, she said, she was verbally and physically attacked by "Google Glass haters." One of the assailants went so far as to snatch the connected headset off her face, though Slocum managed to recover it.

Perhaps Slocum hasn’t been introduced to the old adage that nothing good happens after midnight, but it’s pretty clear that Glass has become a charged subject.

All this even though Google has been cautious about introducing its wearable smart glasses. Though they’ve been around for what seems like years now (since 2012, I believe), Google has been slowly integrating them into the public: first a select crew, and then friends of the crew, or what Google calls its Explorer community. The Privacy Ref wrote about his experience joining the club here.

With Glass, we’re treading on new ground, forced to confront a new era of social norms. Whether we’re concerned about privacy or the “creepy” factor, or we’re simply being reactionary, Glass is just the beginning of a new wearable tech landscape.

But is this really the beginning of an era where companies must include a set of social instructions with a new product?

Time will tell—my guess is no—but, in the meantime, let’s take a look at what the Google Explorer community had to say.

The “Dos” are pretty self-evident. Explore the world (use it); use voice commands (I’d imagine you’d have to); use the screen lock (security first!); be a Google Glass advocate (advertise it to your friends and coworkers and everyone around you, pleeeease). And perhaps most importantly, ask permission to use it—which is kind of a don’t: Like, don’t stand in the corner of a party with them on while being creepy.

Seriously. Please don’t do that.

If you’re going to read War and Peace, the Explorer community points out, don’t do it with Glass (I do recommend the book though). “If you find yourself staring off into the prism* for long periods of time, you’re probably looking pretty weird to the people around you,” the community adds.

I can imagine a whole slew of Google Glass “don’t” signs. Don’t wear while riding a bull or cage-fighting Napoleon Dynamite’s brother, Kip.

And for facilities management, there’s already a website dedicated to selling “No Google Glass” signs.

Really, the third and fourth “don’ts” are one and the same: Don’t expect to be ignored, and when people ask you about them, “don’t get snappy.” For better or worse, wearing Glass makes you—for now, at least—a tech celebrity, so get used to it, and if you don’t want to deal with it, take ‘em off.

As Glass and other wearables become more common, employers—and privacy pros—will have a new set of BYOD challenges to face. V3.co.uk reported on BYOWD (bring your own wearable device) in the European context. Will the eventual data protection regulation be ready to deal with such tech? Beyond Europe, too, existing BYOD benefits and challenges are going to have to make room for wearables.

And what about social norms in the long term?

Some, including Profs. Woodrow Hartzog and Evan Selinger, have written about the democratization of Big Data. What happens once—and it’s already starting—surveillance and Big Data analytics tech gets into the hands of the average person? They question whether the law is prepared to handle that.

And Jonathan Zittrain put it this way:

“What happens when the information being gathered about us is thanks to someone wearing a headset and simply streaming anything interesting that he or she sees, helpfully auto-tagged with our identities? Some bars and restaurants may try to ban Google Glass on the way in, but lessons from anything ranging from mobile phones to hats tell us who’s going to win that war in the longer term. Especially once the distribution of streaming devices has evened out, so it’s not just the occasional freak behaving anti-socially, but all of us doing so, we’ll need to look for other solutions if we don’t want to be stuck simply having to reconcile ourselves to no private moments in public.”

He proposes, rather, a kind of anonymous, democratic, contextual approach to digital rights management (DRM). Though DRM hasn’t necessarily worked with movies and music, he concedes, perhaps we could beam out a “don’t copy me” request in a given situation, such as a classroom. If the group votes no to being recorded, devices in the area would present a warning stating the group’s vote.
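For the technically inclined, here’s roughly what that group vote could look like in code. This is a minimal sketch under my own assumptions (a local broadcast channel, a simple-majority rule, made-up device IDs); Zittrain doesn’t specify a mechanism.

```python
# A minimal sketch of the group-vote idea, assuming a hypothetical local
# broadcast channel. The Beacon format, the simple-majority rule and the
# device IDs are all my own stand-ins, not part of Zittrain's proposal.
from dataclasses import dataclass

@dataclass
class Beacon:
    device_id: str         # pseudonymous, not a real-world identity
    allow_recording: bool  # the preference this device is beaming out

def group_consents(beacons: list[Beacon]) -> bool:
    """True if a simple majority of nearby devices allows recording."""
    if not beacons:
        return True  # nobody nearby has objected
    yes = sum(1 for b in beacons if b.allow_recording)
    return yes > len(beacons) / 2

def before_capture(beacons: list[Beacon]) -> None:
    # Per the proposal, the device warns rather than blocks: it surfaces
    # the group's vote and leaves the moral choice to the wearer.
    if not group_consents(beacons):
        print("Warning: the people around you have voted not to be recorded.")

# Two of three nearby devices have opted out, so the warning appears.
before_capture([Beacon("a1", False), Beacon("b2", False), Beacon("c3", True)])
```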

He goes further:

“Those recorded will be able to have some form of pseudonymous contact information embedded in the recording—so that if it should become public, they can choose to show that they were indeed the ones recorded (again without necessarily having to reveal identity) and then ask—not demand some privilege in contextualizing or commenting upon the recording … The function of the technology is not to impede certain uses by fiat—the way the old DRM did—but rather to allow people to see that other people are implicated by what they do, permitting the moral dimension of our enthusiastic use of technology to become more apparent.”
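That “pseudonymous contact information” piece can be sketched too. One way to get the prove-it’s-me-without-naming-me property he describes is a simple hash commitment; the sketch below is my own illustration of the concept, not a mechanism he lays out.

```python
# A toy illustration of the pseudonymous-tag idea using a hash commitment:
# the recording carries a tag derived from a secret the subject holds, so
# they can later prove the tag is theirs without revealing who they are.
# The mailbox address and the tag format are hypothetical.
import hashlib
import secrets

def make_tag(contact: str) -> tuple[str, str]:
    """Return (tag_to_embed, secret_nonce); the tag alone reveals nothing."""
    nonce = secrets.token_hex(16)
    tag = hashlib.sha256(f"{contact}|{nonce}".encode()).hexdigest()
    return tag, nonce

def prove(tag: str, contact: str, nonce: str) -> bool:
    """The subject reveals contact and nonce to show the tag is theirs."""
    return hashlib.sha256(f"{contact}|{nonce}".encode()).hexdigest() == tag

tag, nonce = make_tag("mailbox-7f3@example.org")  # a pseudonymous mailbox
# ...the tag travels embedded in the recording's metadata...
assert prove(tag, "mailbox-7f3@example.org", nonce)
```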

And PlaceAvoider seems to be in line with this thinking. MIT’s Technology Review recently reported on the software. Developed by computer scientists at Indiana University, the technology identifies potentially confidential or embarrassing images and prevents them from being shared. Since what counts as embarrassing is relative to the user, the software asks users to train it by taking photos of rooms they want blacklisted. For more on how the tech works, I recommend reading the MIT article.
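To give a flavor of how such a blacklist might work, here’s a toy sketch. To be clear, the real system reportedly matches far richer image features; this stand-in just compares perceptual hashes, and the file names and threshold are invented.

```python
# A rough stand-in for the idea, not PlaceAvoider's actual method: flag
# photos that resemble a blacklisted room using a simple 8x8 average hash.
# Requires Pillow; the file names and the threshold are made up.
from PIL import Image

def average_hash(path: str) -> int:
    """64-bit average hash: shrink to 8x8, grayscale, threshold at the mean."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# "Training": the user photographs rooms they never want shared.
blacklist = [average_hash(p) for p in ["bathroom.jpg", "bedroom.jpg"]]

def safe_to_share(path: str, threshold: int = 10) -> bool:
    """Block a photo within `threshold` bits of any blacklisted room."""
    h = average_hash(path)
    return all(hamming(h, b) > threshold for b in blacklist)
```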

Adam Thierer, a senior research fellow at George Mason University, points out that the camera is what prompted Warren and Brandeis’ “The Right to Privacy,” and that, over time, people adjusted to changes in social norms, in part, by purchasing their own cameras. We may very well be on the cusp of a similar wave, but it’s nice to know that technological solutions to privacy protection are being developed.

In the meantime, and until the law grapples with technology such as wearables, we’ll likely have panicked situations, social do-and-don’t lists and, hopefully, some common decency and understanding among the general population.

*interesting word choice

photo credit: tedeytan via photopin cc
