
The Privacy Advisor | How should we regulate facial-recognition technology?


With new technologies, the timeline from “that’s freaky!” to “ho-hum” is rapidly getting compressed. That’s certainly proving true of facial-recognition technology.

In the film “Minority Report,” released in 2002, Director Steven Spielberg envisioned a world (set in 2054) in which every individual would be instantly recognized (and marketed to) in public spaces, such as shopping malls. At the time, it was truly science fiction. Today, the Chinese government has embarked on a project to match faces to its database of 1.3 billion ID photos in seconds, with a target of achieving 90 percent accuracy. And many of us are routinely using Apple’s Face ID (made available in 2017 with the release of the iPhone X) to unlock our phones, or Facebook’s “tag suggestions” feature to identify our friends in photos.

(Note to film buffs: We acknowledge that “Minority Report” involved retinal scans rather than facial-recognition technology, allowing the hero John Anderton, played by Tom Cruise, to escape detection via an eyeball transplant, but the principle is the same.)

The privacy concerns with facial-recognition technology are obvious: Nothing is more “personal” than one’s face. So how is the processing of facial data regulated, whether such data is collected by a government agency as in China or by a private entity like Apple or Facebook? And as facial-recognition technology use becomes more pervasive (as widely predicted), what restrictions are appropriate in the future?

In this article, we first look at what current data protection laws have to say about the use of facial-recognition technology, with a specific focus on U.S. and EU law. We then consider the future of facial-recognition technology regulation.

U.S. law

U.S. law addressing facial-recognition technology is a patchwork, and a small patchwork at that.

No federal law regulates the collection of biometric data, including facial-recognition data. At the state-law level, three states (Illinois, Washington and Texas) have passed biometric legislation, but the Washington statute doesn’t specifically encompass facial-recognition technology. The Illinois Biometric Information Privacy Act, which alone includes a private right of action, is “the gold standard for biometric privacy protection nationwide” in the view of the Electronic Frontier Foundation.

BIPA defines a “biometric identifier” to include a scan of “face geometry,” but specifically excludes photographs. An ancillary defined term under BIPA is “biometric information,” which is information “based on an individual’s biometric identifier used to identify an individual.”

Unlike the EU’s data protection laws (discussed below), BIPA doesn’t begin from a position of prohibiting the use of biometric data but rather puts guardrails around how it’s used. Specifically:

  • Private entities collecting biometric identifiers or biometric information must have a written policy, made publicly available, that sets a retention schedule for destroying such information when the initial purpose of collection has been satisfied or within three years of the individual’s last interaction with the private entity, whichever occurs first (see the sketch after this list).
  • When an individual’s biometric identifier or biometric information is first collected, the individual must: be informed that the biometric data is being collected, be told of the “purpose and length of term” of the biometric data collection, and provide a written release.
  • Biometric data can’t be sold, and it can’t be disclosed unless the individual concerned consents to the disclosure (or certain other exceptional circumstances apply – e.g., required by law).
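
To make the “whichever occurs first” retention rule in the first bullet concrete, here is a minimal sketch in Python. The function and field names are hypothetical, the three-year period is approximated as 1,095 days, and nothing here is BIPA’s own terminology or legal advice; it simply illustrates the earlier-of-two-dates logic.

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative only: BIPA requires destruction when the initial purpose of
# collection has been satisfied or within 3 years of the individual's last
# interaction with the entity, whichever occurs first.
THREE_YEARS = timedelta(days=3 * 365)  # rough approximation of "three years"

def bipa_destruction_deadline(purpose_satisfied: Optional[date],
                              last_interaction: date) -> date:
    """Return the latest date by which the biometric data should be destroyed."""
    statutory_limit = last_interaction + THREE_YEARS
    if purpose_satisfied is None:  # initial purpose not yet satisfied
        return statutory_limit
    return min(purpose_satisfied, statutory_limit)

# Example: purpose satisfied mid-2019, last interaction in March 2017.
print(bipa_destruction_deadline(date(2019, 6, 1), date(2017, 3, 15)))
# -> 2019-06-01 (the purpose was satisfied before the three-year limit ran out)
```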

No surprise that BIPA, with its private right of action, has been the subject of numerous lawsuits, including several focused on facial-recognition technology. Most notably on the facial-recognition technology litigation front, a class-action lawsuit against Facebook for BIPA violations regarding Facebook’s “faceprint” feature is pending in the Northern District of California. Google recently managed to fend off (on standing grounds) a similar lawsuit under BIPA in the Northern District of Illinois concerning its face template application. 

A pending Illinois bill (SB 3053) would provide a carve-out from BIPA for use of biometric data “exclusively for employment, human resources, fraud prevention, or security purposes,” if certain safeguards are in place; but Illinois’s new attorney general has announced his opposition to any weakening of the law. Meanwhile, California’s sweeping new privacy law, the California Consumer Privacy Act, scheduled to become effective in January 2020, will explicitly cover biometric data.

EU law

The former EU Data Protection Directive (Directive 95/46/EC) made no mention of biometric data. With the advent last May of the EU General Data Protection Regulation, biometric data is front and center. Under GDPR Article 9, biometric data (when used for the purpose of uniquely identifying a natural person) is among the “special categories” of personal data that is prohibited from being processed at all unless certain exceptional circumstances apply, and the definition of biometric data specifically refers to “facial images.”

Like Illinois’s BIPA, the GDPR makes an important distinction between facial-recognition data and photographs. Recital 51 of the GDPR states the distinction as follows: “The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person.”

Although the GDPR doesn’t mention video images, such as those collected by a security camera, presumably the same principle will apply. Any images collected, whether via photos or videos, will only constitute biometric data if “specific technical means” are used to uniquely identify or authenticate an individual.
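
As a rough illustration of that distinction (and only an illustration; neither the GDPR nor any regulator prescribes such a check), the sketch below shows how a hypothetical compliance flag might treat captured images as “biometric data” only when a face template is extracted to uniquely identify or authenticate someone. The class and field names are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class ImageProcessing:
    """Hypothetical record describing how captured images are processed."""
    extracts_face_template: bool   # "specific technical means" (e.g., a faceprint)
    used_for_identification: bool  # uniquely identifying a natural person
    used_for_authentication: bool  # verifying a claimed identity

def is_gdpr_biometric_data(p: ImageProcessing) -> bool:
    # Per Recital 51, photos and video count as biometric data only when
    # processed through specific technical means allowing the unique
    # identification or authentication of a natural person.
    return p.extracts_face_template and (
        p.used_for_identification or p.used_for_authentication
    )

# Merely storing CCTV footage: not biometric data under Article 9.
print(is_gdpr_biometric_data(ImageProcessing(False, False, False)))  # False
# Matching faces against a watchlist: biometric data, Article 9 applies.
print(is_gdpr_biometric_data(ImageProcessing(True, True, False)))    # True
```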

If facial-recognition technology is used for purposes that fall within the GDPR’s definition of biometric data, the only exception likely to be practical for many commercial applications of facial-recognition technology is where “the data subject has given explicit consent.” This requirement would appear to impose a nearly insurmountable hurdle in many common facial-recognition technology use scenarios, including where facial-recognition technology is used for marketing or security purposes. Any kind of passive consent (e.g., an individual proceeding into an environment where facial-recognition technology is active after passing prominent signs indicating that facial-recognition technology is being employed) won’t pass muster under the GDPR.

Notably, however, Article 9(4) of the GDPR permits each EU member country to introduce certain derogations with respect to restrictions on processing biometric data (“member states may maintain or introduce further conditions, including limitations”). The Netherlands, for instance, has provided a carve-out for biometric data if necessary for authentication or security purposes, and Croatia’s new data protection law exempts surveillance security systems. It’ll be interesting to see if other EU members follow suit.

What’s the future of facial-recognition technology regulation?

The acknowledgment that biometric data, including facial-recognition data, constitutes personal information is certainly overdue. What’s still being worked out is what precautions (including notice and consent) are appropriate before facial-recognition data can be collected and used, and what exceptions may be warranted for security and fraud prevention purposes, among others.

Let’s face it (pun intended): Facial-recognition technology can be used in ways that actually improve privacy and security through more accurate authentication. Imagine faster check-in and passport control at airports, as well as heightened security; speedier and more accurate patient care at hospitals; and more efficient payment confirmations, whether online or at retail establishments. Apple says that the probability a random stranger could unlock your iPhone using Face ID is one in a million, which makes it 20 times more secure than Touch ID.
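
The arithmetic behind that 20x comparison is simple, assuming (as we do here) that it rests on Apple’s published false-accept odds of roughly 1 in 1,000,000 for Face ID versus 1 in 50,000 for Touch ID:

```python
# False-accept odds Apple has cited for each method (treated as illustrative figures).
face_id_false_accept = 1 / 1_000_000   # chance a random stranger unlocks via Face ID
touch_id_false_accept = 1 / 50_000     # chance a random stranger unlocks via Touch ID

improvement = touch_id_false_accept / face_id_false_accept
print(f"Face ID is roughly {improvement:.0f}x harder to fool")  # ~20x
```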

So the challenge becomes: How do we encourage legitimate uses of facial-recognition technology to flourish and reap the technology’s undeniable benefits, while preventing misuse and ensuring respect for privacy rights?

There would appear to be two divergent paths forward:

  1. The path of strict regulation, as illustrated by the GDPR (although even the GDPR allows EU member countries some flexibility to implement derogations).
  2. The more flexible path promoted by the U.S. Federal Trade Commission, as described in its October 2012 report “Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies” (an approach also endorsed in its 2015 report on the internet of things). The FTC has recommended certain best practices in deploying facial-recognition technology, including building in privacy by design, implementing reasonable security protections, and providing consumer choice on a context-sensitive basis. But the FTC stopped short of endorsing regulation.

Or perhaps there’s a middle path. Microsoft President Brad Smith recently weighed in on the facial-recognition technology debate with a provocative proposal. While Smith is an advocate of market forces, he believes that new laws are required to prevent a facial-recognition technology “commercial race to the bottom” that results in the abuse of personal data. In particular, he recommends legislation that requires transparency (including documentation explaining the capabilities and limitations of the technology), enables testing for accuracy and unfair bias, and provides appropriate notice and consent. Simultaneously, Microsoft has adopted facial-recognition technology principles concerning fairness, transparency and the like that can serve as industry best practices. Given the current stage of facial-recognition technology development, Microsoft’s “incremental approach” seems eminently sensible.

 


2 Comments


  • Emma Butler • Jan 30, 2019
    I also think a middle way is the right way. Too much of the debate on facial recognition is caught up in either government use of the tech for surveillance purposes, or marketing to people in shopping centres, whereas there are many other sensible, reasonable uses of the tech for things like fraud prevention, or as a secure access control (identification and authentication for things like accessing your phone, bank account, certain premises and so on). While there are some core principles / obligations that should apply to all uses of the tech, like transparency, privacy by design, and testing for accuracy and unfair bias, other obligations should depend on the entity using the tech, the purposes and the overall context. We shouldn't look to regulate government surveillance and shopping-centre marketing in the same way!
    
    In the EU the Netherlands and Croatia recognised the growing use of biometrics (not just facial recognition) for fraud prevention and secure access purposes and created a lawful basis so it continued to be legal. (Croatia's provisions are more detailed and nuanced than suggested in the article.) Sadly the UK failed to do the same, despite being presented with the same arguments and a proposal for a new lawful basis. 
    
    The other aspect that doesn't get mentioned is that for these technologies to work effectively and be improved to stay ahead of those looking to game them, ongoing R&D and testing is needed. Testing for unfair bias often requires companies to know and tag data with things like gender and ethnicity. These things cannot and should not be done on the basis of consent from individuals. But the law fails to recognise and provide for this. The current law gives companies no choice but to ask for 'consent' for biometric data processing or for other sensitive data (metadata) but this is all part and parcel of developing the technology to work properly and fairly. Many EU DP laws still prohibit the collection / use of gender and ethnicity for bias avoidance or equality monitoring purposes. 
    
    Companies can offer an opt out from people having their data used for such R&D, but if too many people opt out it can seriously affect the tech development. One thing forgotten about is that even though R&D in companies needs lots of data, it is an internal activity to make stuff better or to make better stuff. It's not in anyone's interests to start selling the data or otherwise providing access to it to anyone outside the company. 
    
    I also think there is a difference between a company doing the R&D to develop and improve fraud detection mechanisms or security measures and a company whose R&D is to better target adverts. I would argue there should be more choices / opt outs for having your data used to improve marketing whereas R&D to improve security and fraud prevention benefits all the individuals in question and broader society. We expect our banks to be continually working to reduce and avoid fraud and keep our money secure. When the security measures move from 'mother's maiden name' to biometrics, our expectations of the bank don't change.  Yet the law puts up barriers.
  • Anthony Weaver • Feb 8, 2019
    A great article. I would also add that we need a clear line of sight between facial recognition and facial detection. Much of the interest from a marketing perspective is less to do with the identification of individuals and more to do with the use of facial metadata, i.e., gender and approximate age, as a marketing point. It seems that facial detection is brought into the "biometric" discussion when actually no information is collected that can uniquely identify an individual, and any marketing is not targeted at a person but at an unidentified person in an age and gender bracket.