Over the weekend, The New York Times reported on a little-known company called Clearview AI. The startup has "devised a groundbreaking facial recognition app" that allows a user to "take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared." The app is now being used by more than 600 law enforcement agencies, including the U.S. Federal Bureau of Investigation and Department of Homeland Security. Its system includes a database of more than 3 billion images that the company said it "scraped from Facebook, YouTube, Venmo and millions of other websites."
Reporter Kashmir Hill spent months investigating the startup and interviewed the brains behind the company, Hoan Ton-That. The feature, titled "The Secretive Company That Might End Privacy as We Know It," appeared "above the fold" on the front page of the Sunday edition of the newspaper — vaunted space for an article about privacy. The piece suggests the app is helping law enforcement solve countless cases, but it also cites several concerns about discrimination and the technology's potential use for surveillance by bad actors. (Plus, who wants to be shamed for wearing pajamas in public?)
The explosive piece is part of a larger conversation taking place, both in the United States and abroad, about whether and how the powerful technology should be regulated. Washington state, for example, just reintroduced comprehensive privacy legislation that features an in-depth section on regulating facial-recognition technology.
Read more about the Washington Privacy Act:
"Comparing the new Washington Privacy Act to the CCPA," by Faegre Baker Daniels Associate and former IAPP Westin Research Fellow Mitchell Noordyke, CIPP/E, CIPP/US, CIPM
"Washington state hearing debates reintroduced, controversial privacy bill," by IAPP Staff Writer Jennifer Bryant
Last week, according to a document leaked to the media, European Union officials were said to be considering a three- to five-year ban on the technology in public spaces. In response Monday, Alphabet CEO Sundar Pichai said, "I think it is important that governments and regulations tackle it sooner rather than later and give a framework for it. It can be immediate but maybe there's a waiting period before we really think about how it's being used."
Microsoft Chief Legal Officer Brad Smith, however, took a different tack, saying, "I'm reluctant to say let's stop people from using technology in a way that will reunite families when it can help them do it. ... The second thing I would say is you don't ban it if you actually believe there is a reasonable alternative that will enable us to, say, address this problem with a scalpel instead of a meat cleaver."
Scholars Woodrow Hartzog and Evan Selinger have been calling for the technology to be banned for the past year. They argue that obscurity, a key ingredient to public anonymity and personal freedom, is lost when ubiquitous facial recognition observes the public space. Some municipalities have followed suit, outright banning the technology, including in San Francisco and Oakland, California, as well as Somerville and Brookline, Massachusetts.
Last week, the U.S. House Committee on Oversight and Reform held the third in a series of hearings on the potential risks posed by government and commercial use of facial-recognition tech. Notably, there was some bipartisan support for regulating the technology, while there was less agreement on an outright ban.
However, Harvard Kennedy School Fellow and longtime security thought leader Bruce Schneier argues that banning facial-recognition tech misses the point. "Focusing on one particular identification method misconstrues the nature of the surveillance society we're in the process of building," he writes in an op-ed for The New York Times. Schneier says mass surveillance can be broken down into three components: identification, correlation and discrimination.
Facial recognition is perhaps the most talked-about surveillance technology on the market right now, but Schneier points out there are other technologies that can identify people at a distance, including through gait tracking or even heartbeats (apparently by using a laser-based system). As other examples, our smartphones constantly emit media-access-control addresses that can be tracked over time, and automatic license-plate readers can easily track the movement of cars.
Correlation, Schneier says, is facilitated by data brokers, and there is currently only one law in the U.S. — in the state of Vermont — that regulates this industry. "The point is that it doesn't matter which technology is used to identify people," he writes. "What's important is that we can be consistently identified over time."
In short, all three components need to be regulated and one specific technology should not be singled out, he says.
It's also worth noting that, according to Kashmir Hill's report on Clearview AI, the company scraped public-facing data off social media and other websites. The issue reminds me of the legal battle that is still developing between LinkedIn and hiQ. The IAPP's Rita Heimes, CIPP/E, CIPP/US, CIPM, analyzed the case for Privacy Tracker last September.
In it, the U.S. Court of Appeals for the 9th Circuit ruled that information posted to social media sites and publicly accessible may be scraped and collected by third parties "regardless of the social media sites' terms and conditions or even technical means taken to prevent data mining." Though the case is not yet final, Heimes points out that the opinion, issued last Sept. 9, has "significant implications for the personal data marketplace."
Read more about data scraping:
"Data scraping and the implications of the latest LinkedIn-hiQ court ruling," by IAPP General Counsel and Data Protection Officer Rita Heimes, CIPP/E, CIPP/US, CIPM
How will the court's interpretation of public data scraping affect other services, like the facial-recognition tech services of Clearview AI?
That question aside, with Washington state debating a comprehensive privacy bill and indications that several other states are mulling their own comprehensive privacy legislation, motivation on Capitol Hill to pass federal privacy legislation will only increase.
With that in mind, will potential U.S. privacy legislation ultimately address these core issues around mass surveillance? Should facial recognition be banned in public like we've seen in some municipalities, or is that missing the point? I'd love to hear your thoughts.