At a U.S. House Committee on Oversight and Reform hearing Jan. 15, lawmakers continued to investigate the potential risks posed by both government and commercial use of facial-recognition technology. The hearing was the third in a series on the topic as the committee seeks guidance on how to legislate use of the controversial surveillance tool, which law enforcement cites as essential to thwarting dangerous crimes against the American public and which private industry seeks to use for everything from keyless housing entry to timecard login to airline boarding.
The topic has united lawmakers from both parties, though some have called for an outright ban on the technology until its uses and risks can be thoroughly vetted, while others have urged a less severe approach given its merits in fighting crime. Committee Chairwoman Rep. Carolyn Maloney, D-N.Y., said the technology is clearly not ready for "prime time" and that the committee is "committed to introducing and marking up common-sense facial-recognition legislation in the near future." Rep. Jim Jordan, R-Ohio, has been among the most vocal Republicans on the issue, focusing his concerns on government use of the technology.
Wednesday's hearing follows a December 2019 report by the National Institute of Standards and Technology on its Face Recognition Vendor Testing Program, which, in 2018, studied 127 facial-recognition algorithms from 39 commercial developers and one university, using 26 million mugshots from the Federal Bureau of Investigation. As NIST's Charles Romine testified, there have been vast improvements to many algorithms' accuracy rates since NIST's last study in 2013.
That said, accuracy rates across algorithms vary "by factors of 10 to beyond 100 times." False positives on identification may "present privacy and civil rights and civil liberties concerns, such as when matches result in additional questioning, surveillance, errors in benefit adjudication or loss of liberty," Romine testified. In addition, false positive rates are higher for women than for men and are higher for Asian and African American faces than for Caucasian faces.
That's of grave concern to representatives like Rashida Tlaib, D-Mich., who said she represents a district that is predominantly black and therefore would be disproportionately affected by false-positive matches.
In a now-well-known study by the American Civil Liberties Union, often referred to by committee members Wednesday and in hearings past, Amazon's "Rekognition" program incorrectly identified the faces of 28 members of Congress as positive matches to criminal mugshots.
And while the hearing was slated to focus on private sector use of the technology, conversation frequently came back to how the American public is being surveilled now and may be in the future by those in power.
"The real concern," Jordan said, "is what's happening with the government and how the government may use this technology ... and we know ... facial-recognition technology was used by Baltimore police to monitor protesters after the death of Freddie Gray a few years ago in Baltimore."
Meredith Whittaker, co-founder of the AI Now Institute, testified that while the technology has raced forward, "marketing these technologies, making claims to accuracy, what we have not seen is validation race forward or mechanisms for consent. I think we need to pause the technology and let the rest of it catch up so we don't allow corporate interests and corporate technology to race ahead and be built into our core architecture without putting safeguards in place."
Rep. Alexandria Ocasio-Cortez, D-N.Y., wanted to know about potential remedies for those aggrieved.
"What can a consumer or constituent like mine do if they’ve been harmed?" she asked. "A computer algorithm will tell a black person they have likely committed a crime when they are innocent; how can a consumer or constituent really have any recourse against a company or agency if misidentified?"
Whittaker said right now, there are "very few ways." Illinois legislation on biometric privacy allows for litigation, but "one, you have to know it’s been collected; two, you have to know it's been misused; and three, you have to have the resources to bring a suit," which many of those aggrieved will not have.
"This is some real life 'Black Mirror' stuff that we’re seeing here," Ocasio-Cortez said, referring to the popular science fiction series. "And I think it’s really important that everyone really understand what's happening, because this is happening secretly, as well."
But Rep. Kelly Armstrong, R-N.D., was more measured in his approach, warning his colleagues that the technology has great merit and that it's the government's job to figure out how to best use it without violating the rights of the citizenry.
"Our job is to get it right," Armstrong said of crafting federal legislation. "But part of doing that is recognizing that it’s here, and in some way, shape or form, it’s gonna continue to be here. ... There are tons of positives, but there are dangers. We’re going to use it, but we have to be cognizant of the fact that this is a little different than a lot of things because identity is something that can never go away once it’s been identified."