With new technologies, the timeline from “that’s freaky!” to “ho-hum” is rapidly getting compressed. That’s certainly proving true of facial-recognition technology.
In the film “Minority Report,” released in 2002, Director Steven Spielberg envisioned a world (set in 2054) in which every individual would be instantly recognized (and marketed to) in public spaces, such as shopping malls. At the time, it was truly science fiction. Today, the Chinese government has embarked on a project to match faces to its database of 1.3 billion ID photos in seconds, with a target of achieving 90 percent accuracy. And many of us are routinely using Apple’s Face ID (made available in 2017 with the release of the iPhone X) to unlock our phones, or Facebook’s “tag suggestions” feature to identify our friends in photos.
(Note to film buffs: We acknowledge that “Minority Report” involved retinal scans rather than facial-recognition technology — allowing the hero John Anderton, played by Tom Cruise, to escape detection via an eyeball transplant — but the principle is the same.)
The privacy concerns with facial-recognition technology are obvious: Nothing is more “personal” than one’s face. So how is the processing of facial data regulated, whether such data is collected by a government agency as in China or by a private entity like Apple or Facebook? And as facial-recognition technology use becomes more pervasive (as widely predicted), what restrictions are appropriate in the future?
In this article, we first look at what current data protection laws have to say about the use of facial-recognition technology, with a specific focus on U.S. and EU law. We then consider the future of facial-recognition technology regulation.
U.S. law
U.S. law addressing facial-recognition technology is a patchwork — and a small patchwork, at that.
No federal law regulates the collection of biometric data, including facial-recognition data. At the state level, three states (Illinois, Washington and Texas) have passed biometric legislation, though the Washington statute doesn’t specifically encompass facial-recognition technology. The Illinois Biometric Information Privacy Act (BIPA), the only one of the three to include a private right of action, is “the gold standard for biometric privacy protection nationwide” in the view of the Electronic Frontier Foundation.
BIPA defines a “biometric identifier” to include a scan of “face geometry,” but specifically excludes photographs. An ancillary defined term under BIPA is “biometric information,” which is information “based on an individual’s biometric identifier used to identify an individual.”
Unlike the EU’s data protection laws (discussed below), BIPA doesn’t begin from a position of prohibiting the use of biometric data but rather puts guardrails around how it’s used. Specifically:
- Private entities collecting biometric identifiers or biometric information must have a written policy, made publicly available, that sets a retention schedule for destroying such information when the initial purpose of collection has been satisfied or within three years of the individual’s last interaction with the private entity, whichever occurs first.
- When an individual’s biometric identifier or biometric information is first collected, the individual must: be informed that the biometric data is being collected, be told of the “purpose and length of term” of the biometric data collection, and provide a written release.
- Biometric data can’t be sold, and it can’t be disclosed unless the individual concerned consents to the disclosure (or certain other exceptional circumstances apply – e.g., required by law).
No surprise that BIPA, with its private right of action, has been the subject of numerous lawsuits, including several focused on facial-recognition technology. Most notably, a class action alleging that Facebook’s “faceprint” feature violates BIPA is pending in the Northern District of California. Google recently fended off (on standing grounds) a similar BIPA suit in the Northern District of Illinois concerning the face templates it creates.
A pending Illinois bill (SB 3053) would provide a carve-out from BIPA for use of biometric data “exclusively for employment, human resources, fraud prevention, or security purposes,” if certain safeguards are in place; but Illinois’s new attorney general has announced his opposition to any weakening of the law. Meanwhile, California’s sweeping new privacy law, the California Consumer Privacy Act, scheduled to become effective in January 2020, will explicitly cover biometric data.
EU law
The former EU Data Protection Directive (Directive 95/46/EC) made no mention of biometric data. With the advent of the EU General Data Protection Regulation in May 2018, biometric data is front and center. Under GDPR Article 9, biometric data (when used for the purpose of uniquely identifying a natural person) is among the “special categories” of personal data whose processing is prohibited altogether unless certain exceptional circumstances apply, and the definition of biometric data specifically refers to “facial images.”
Like Illinois’s BIPA, the GDPR makes an important distinction between facial-recognition data and photographs. Recital 51 of the GDPR states the distinction as follows: “The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person.”
Although the GDPR doesn’t mention video images, such as those collected by a security camera, presumably the same principle will apply: any images collected, whether via photos or videos, will constitute biometric data only if “specific technical means” are used to uniquely identify or authenticate an individual.
If facial-recognition technology is used for purposes that fall within the GDPR’s definition of biometric data, the only exception likely to be practical for many commercial applications is where “the data subject has given explicit consent.” That requirement would appear to impose a nearly insurmountable hurdle in many common use scenarios, including where the technology is deployed for marketing or security purposes. Any kind of passive consent (say, an individual walking past prominent signs into an area where facial-recognition technology is active) won’t pass muster under the GDPR.
Notably, however, Article 9(4) of the GDPR permits each EU member country to introduce certain derogations with respect to restrictions on processing biometric data (“member states may maintain or introduce further conditions, including limitations”). The Netherlands, for instance, has provided a carve-out for biometric data if necessary for authentication or security purposes, and Croatia’s new data protection law exempts surveillance security systems. It’ll be interesting to see if other EU members follow suit.
What’s the future of facial-recognition technology regulation?
The acknowledgment that biometric data, including facial-recognition data, constitutes personal information is certainly overdue. What’s still being worked out is what precautions (including notice and consent) are appropriate before facial-recognition data can be collected and used, and what exceptions may be warranted for security and fraud prevention purposes, among others.
Let’s face it (pun intended): Facial-recognition technology can be used in ways that actually improve privacy and security through more accurate authentication. Imagine faster check-in and passport control at airports, as well as heightened security; speedier and more accurate patient care at hospitals; and more efficient payment confirmations, whether online or at retail establishments. Apple says that the probability a random stranger could unlock your iPhone using Face ID is one in a million — which makes it 20 times more secure than Touch ID.
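For the curious, the “20 times” figure follows directly from Apple’s published false-match estimates, roughly 1 in 1,000,000 for Face ID versus 1 in 50,000 for Touch ID:

$$\frac{1/50{,}000}{1/1{,}000{,}000} = \frac{1{,}000{,}000}{50{,}000} = 20$$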
So the challenge becomes: How do we encourage legitimate uses of facial-recognition technology to flourish and reap the technology’s undeniable benefits, while preventing misuse and ensuring respect for privacy rights?
There would appear to be two divergent paths forward:
- The path of strict regulation, as illustrated by the GDPR (although even the GDPR allows EU member countries some flexibility to implement derogations).
- The more flexible path promoted by the U.S. Federal Trade Commission, as described in its October 2012 report “Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies” (an approach also endorsed in its 2015 report on the internet of things). The FTC has recommended certain best practices in deploying facial-recognition technology, including building in privacy by design, implementing reasonable security protections, and providing consumer choice on a context-sensitive basis. But the FTC stopped short of endorsing regulation.
Or perhaps there’s a middle path. Microsoft President Brad Smith recently weighed in on the facial-recognition technology debate with a provocative proposal. While Smith is an advocate of market forces, he believes that new laws are required to prevent a facial-recognition technology “commercial race to the bottom” that results in the abuse of personal data. In particular, he recommends legislation that requires transparency (including documentation explaining the capabilities and limitations of the technology), enables testing for accuracy and unfair bias, and provides appropriate notice and consent. Simultaneously, Microsoft has adopted facial-recognition technology principles concerning fairness, transparency and the like that can serve as industry best practices. Given the current stage of facial-recognition technology development, Microsoft’s “incremental approach” seems eminently sensible.