
Facial recognition technology: Should faceprints be considered personally identifiable information?


A little more than two decades ago, the idea that technology could recognize an individual’s face was merely the stuff of science fiction. Yet, at the dawn of a new decade, facial recognition technology has not only become a reality, it is becoming commonplace, from security surveillance to social media photo tagging.

Like the tips of our fingers, our face, when coupled with the appropriate algorithm and database, becomes a unique biometric identifier. Yet, unlike fingerprints and other biometric identifiers, a faceprint can be captured from a distance without an individual's knowledge.
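To make that comparison concrete, the sketch below is a purely illustrative toy, not drawn from the article or from any vendor’s system. It assumes some model has already reduced each face image to a fixed-length vector of numbers (the faceprint); identification is then simply a nearest-neighbor search over a database of enrolled vectors. All names, vectors and the similarity threshold are invented for illustration.

    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two faceprint vectors, in [-1, 1];
        # 1.0 means the vectors point in exactly the same direction.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe, database, threshold=0.8):
        # Return the label of the closest enrolled faceprint, or None
        # if nothing in the database is similar enough to the probe.
        best_name, best_score = None, threshold
        for name, enrolled in database.items():
            score = cosine_similarity(probe, enrolled)
            if score >= best_score:
                best_name, best_score = name, score
        return best_name

    # Toy database of made-up four-dimensional "faceprints".
    db = {
        "alice": np.array([0.9, 0.1, 0.3, 0.2]),
        "bob":   np.array([0.1, 0.8, 0.2, 0.7]),
    }
    print(identify(np.array([0.88, 0.12, 0.31, 0.18]), db))  # -> alice

The detail worth noticing is that the vector alone identifies no one; a faceprint only becomes an identifier once a database of labeled vectors exists to match it against, which is why much of the debate that follows centers on who may build and keep such databases.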

Experts discussed the implications of this technology last December at a roundtable hosted by the Federal Trade Commission (FTC). The forum made it clear that the new technology creates a vast array of opportunities for business and law enforcement but also raises concerns about privacy and anonymity.

One expert suggests faceprints be considered personally identifiable information.

This seemingly simple answer to what has been considered a complex problem was posed by one of the inventors of facial recognition technology. As vice chairman of the International Biometric & Identification Association (IBIA), a biometrics trade association, Joseph Atick has spent the last 20-plus years leading several companies in the identity management industry.

Atick says it’s critical for individuals to have control over their faceprints. The complex problem of assigning fair information practices to varying facial recognition applications “can be resolved and addressed if you give the consumer the control over the faceprint and say no application can exploit the faceprint without (the consumer’s) explicit consent.”

In other words, our faces should be copyrighted.

Atick says that, despite the vast deployment of the technology, a generic approach such as legislating that a faceprint is PII gives weight to responsible use and makes companies liable if they do not appropriately protect faceprints, similar to the practices for health or financial records.

“If we limit regulations to a faceprint equals PII, then industry self-regulates to police that. It creates liability and allows the legal system to do the enforcement,” says Atick.

“Privacy advocates on the (FTC) panel talked about a careful tiered approach to privacy,” says Pam Dixon, but Atick pointed out that it is difficult to police facial recognition. Dixon is an author, researcher, and founder of the World Privacy Forum. She wrote a report on digital signage and facial recognition technology, The One-Way Mirror Society, and has led an effort to create the first set of collaborative consumer privacy principles for digital signage.

Where is the technology being used? Is it collecting and storing faceprints? Such questions are difficult to police application by application; instead, Atick argues, the problem can be solved by making the misuse of faceprints a legal liability.

“What Dr. Atick was saying was a wake-up call,” she says. “He’s not a privacy advocate. He was there to advocate for companies using it. So for him to say that was stunning.”

In its comments to the FTC, the World Privacy Forum wrote, “Dr. Atick’s approach provides an important avenue of thinking that we urge the FTC to explore further. We believe it holds significant promise and has the most potential for a positive and fair outcome. The policy dialogue around facial recognition and detection technologies has been overlaid by approaches with roots in past technologies from past eras. Much of the policy discussions to date have not been informed by Dr. Atick’s level of knowledge and, as such, have not taken into sufficient account the uniqueness of the faceprint, the nature of the technology and the manner in which it is being deployed.”

Dan Solove agrees that faceprints should be considered PII. “Faceprints are specifically designed to be identifiable to a person and to be a means of identifying a person, and thus I believe they should be considered PII.”

As the John Marshall Harlan Research Professor of Law at George Washington University Law School, Solove is considered one of the world’s leading experts in privacy law. He says a “combination of self-regulation as well as legislation and agency supervision” will be the best solution. “Self-regulation alone lacks the teeth and uniformity to be effective, but when combined with reasonable and flexible legislation, self-regulation can be effective in many ways.”

The Software & Information Industry Association (SIIA) has a different take on privacy legislation. In its comments to the FTC, the SIIA asserts that “there is no need to develop specialized privacy principles for facial recognition or facial detection technologies.”

“Privacy is context-dependent, not technology-specific,” the SIIA contends.

SIIA Public Policy Vice President Mark MacCarthy says facial recognition and detection technologies “raise different privacy issues depending on the context in which they are used.” Digital signage that does not collect faceprints “might call for notice,” while “more advanced use of facial recognition technology such as tagging pictures on a social network might call for some kind of consent. There doesn’t need to be legislation specific to faceprints,” he contends, “because the technology can be used in so many different contexts it would be impossible to write meaningful privacy protections.”

MacCarthy also says, “In the contexts where the technology is being used now, such as digital signage or tagging photos on social networks, the industry has struck a pretty good balance.”

Speaking at the FTC roundtable, Facebook Director of Privacy Erin Egan raised the notion of context as well. The social networking site came under fire from privacy advocates when it introduced facial recognition software to streamline photo tagging.

“We’re not automatically identifying you,” Egan said. “We’re just suggesting people who might be in your photos so we can make tagging easier for you. I mean, that’s the context…even in that context, there are important principles around notice, around control, around security…I still think these framework principles apply, but again, I think that when we look at how they should be applied, it should depend on the context and users’ expectations.”

Last October, Atick published Face Recognition in the Era of the Cloud and Social Media: Is it Time to Hit the Panic Button? In it, Atick describes the “perfect storm” that gives him reason for concern. The combination of “enthusiastic participation in social media,” increased use of digital cameras and improved facial recognition algorithms “opens the door for potentially achieving the unthinkable: the linking of online and offline identities.”

Dixon questions whether Pandora’s box is already open when it comes to facial recognition technology. “We proceeded down a path unconsciously because the market is already capitalizing.” Dixon has spent time in India during the rollout of the government’s biometric identification system and in Japan during the implementation of its smart grid. “One thing I’ve learned about biometrics: once it’s out there, the game is up. You can’t take it back.”

She points out that the Fair Credit Reporting Act and the Equal Credit Opportunity Act were both pieces of U.S. legislation that rolled back discriminatory market practices to protect consumer rights. Dixon thinks something similar could help roll back current market practices to help protect individual identities.

Dixon thinks these new technologies call for a new way of thinking about our privacy rights. When in public, our expectation of privacy is naturally diminished, but in the past, we’ve often had some form of anonymity. As facial recognition becomes more ubiquitous in the public space, our expectation of privacy and anonymity could be eradicated, she warns.

The recent U.S. Supreme Court ruling on GPS tracking, United States v. Jones, sheds light on the new privacy paradigm for Dixon. In particular, Dixon cites Justice Sonia Sotomayor’s concurring opinion. “More fundamentally,” wrote Sotomayor, “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties…This approach is ill-suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.”

Dixon says Sotomayor’s concurring opinion “forms the threshold of what we need to be looking at.”

In comments to the FTC, the Electronic Privacy Information Center (EPIC) recently called for a moratorium on commercial deployment of facial recognition technology. “While the use of facial recognition technology has increased over the past several years, few legal safeguards currently protect consumer privacy and security.”

EPIC cites existing biometric statutes in two states, Illinois and Texas, but adds that the U.S. Constitution “only protects individuals from privacy intrusion by the state, not private companies or other individuals.” Internationally, the new EU data protection framework mandates that any organization processing biometric data conduct a “data protection impact assessment” as well as meet personal data obligations. As such, EPIC is asking the FTC to require companies “collecting, handling, storing and transmitting” biometric data “to adhere to a framework of Fair Information Practices.”

When asked if current EU and UK legislation appropriately protects individuals from facial recognition’s misuse, a spokesman from the UK Information Commissioner’s Office said, “Images of individuals are likely to be regarded as personal data and therefore UK laws require organisations to have a legitimate purpose for processing this information. In some cases this may be in the legitimate business interest of the organisation but in others this will require the consent of the individual. UK and EU law is clear that consent must be both freely given and informed. Other aspects of the Data Protection Act such as subject access and the right to object to processing will also apply to facial recognition technologies.”

For Atick, the solution resides in legislation requiring faceprints to be considered PII. Once this is done, and liability becomes a driving factor for companies to ensure faceprints are protected, industry can self-regulate. He says facial recognition technology is helpful for fighting crime and terrorism and for other security needs. It has proven helpful in Ontario, Canada, for problem gamblers who opt in to the Ontario Lottery and Gaming Corporation’s voluntary self-exclusion program.

Atick says the industry has been “fully behind” the IBIA but adds that it is concerned about “rogue applications” that use the technology “to make a name for themselves.” He says many of the newer social media sites and Internet-based companies are exploiting facial recognition technology. “We’ve reached out and asked them to use the technology responsibly.”

The biggest concern, according to Atick, is the construction of large databases of faceprints. “We need to address the root cause of this threat to privacy by focusing on the ease with which identification databases can be built through automated harvesting of identity-tagged images over the web,” he writes. Changing the attitudes of data handlers—like social media and image sharing sites—and elevating faceprints to the level of PII can help.
