
Privacy and racial justice: Regulating facial recognition technology


Growing recognition of the disparate, negative impacts of facial recognition technologies on different ethnic and racial groups, along with lingering privacy concerns about their use, has made companies increasingly hesitant to tie their bottom lines to them. IBM, for example, recently announced that it would exit the facial recognition business due to concerns over racial bias inherent in the technology. The decision was reportedly made in response to the killing of George Floyd in the custody of the Minneapolis police. Similarly, in response to the nationwide protests that followed, Amazon put a moratorium on police use of its facial recognition technology, while Microsoft said it would not sell its facial recognition technology to police departments until a federal law that is “grounded in human rights” is passed.

Regulating facial recognition technology

Dozens of pieces of federal legislation have been proposed that would regulate the use of facial recognition technologies on a national scale. U.S. Rep. Rashida Tlaib, D-Mich., has introduced a bill that would prohibit the federal government from funding facial recognition technologies. On Twitter, Tlaib has described facial recognition technology outright as “racist,” pointing to a study by the U.S. National Institute of Standards and Technology that found a majority of facial recognition algorithms perform worse on nonwhite faces. Indeed, U.S.-developed algorithms had higher false-positive rates for one-to-one matching — where a photo of a person is matched to another photo of that same person in a database — for Asians, African Americans and Native Americans. Meanwhile, for one-to-many matching — when determining if a person has any matches in a database — these algorithms were most likely to generate a false positive for African-American women, “which puts this population at the highest risk for being falsely accused of a crime.”
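To make the two matching modes concrete, the sketch below (using hypothetical similarity scores, a hypothetical threshold and an invented gallery size, not NIST's actual test protocol) shows how false-positive rates can be tallied per demographic group for verification (one-to-one) and identification (one-to-many), and why the one-to-many rate grows with the size of the database being searched:

```python
# Hedged sketch: hypothetical similarity scores, not NIST's methodology.
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 0.8  # hypothetical cutoff for declaring a "match"

def one_to_one_fpr(nonmated_scores: np.ndarray) -> float:
    """Verification: probe vs. one enrolled photo of a *different* person.
    FPR = fraction of non-mated comparisons that clear the threshold."""
    return float(np.mean(nonmated_scores >= THRESHOLD))

def one_to_many_fpr(score_matrix: np.ndarray) -> float:
    """Identification: probe vs. a gallery of other people (rows = probes).
    A false positive occurs if ANY gallery entry clears the threshold,
    so the rate grows with gallery size."""
    return float(np.mean((score_matrix >= THRESHOLD).any(axis=1)))

# Two hypothetical demographic groups; group B is simulated with slightly
# higher non-mated scores, the kind of skew the NIST study reported.
group_a = rng.normal(0.55, 0.1, size=(1000, 50))  # 1000 probes x 50 gallery
group_b = rng.normal(0.60, 0.1, size=(1000, 50))

for name, scores in (("group A", group_a), ("group B", group_b)):
    print(f"{name}: 1:1 FPR = {one_to_one_fpr(scores.ravel()):.4f}, "
          f"1:N FPR = {one_to_many_fpr(scores):.4f}")
```

Even when per-comparison error rates are identical, searching a larger gallery multiplies the chance of at least one spurious match, which is why one-to-many deployments such as suspect searches amplify small group-level disparities.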

NIST studies, which are considered “the gold standard for evaluating facial recognition algorithms,” are not alone in finding error-rate gaps between men and women, with the highest false match rates occurring for black females. The Department of Homeland Security’s Science and Technology Directorate also released a study last year that found commercial facial recognition technologies took longer and were less accurate when processing darker skin.

Scholars and researchers have also uncovered racial and gender disparities in the accuracy rates of facial recognition technologies, disparities that would likely distribute these technologies’ risks most heavily onto vulnerable members of society, particularly black women. Joy Buolamwini of the MIT Media Lab and Timnit Gebru of Microsoft Research have taken the lead in studying bias in facial recognition technologies and algorithms. Specifically, they have looked at whether facial recognition technologies can accurately determine the gender of a face. In their study, they ran the faces of parliamentarians from three African countries and three European countries through three different facial recognition systems produced by IBM, Microsoft and Megvii, a Chinese technology company. While the accuracy rate at identifying male faces for all three systems was in the upper 90s, the accuracy rate for identifying female faces ranged from 79% to 89%. Each system also performed better on lighter-skinned subjects (again, in the upper 90s) than on darker-skinned subjects (78% to 87%). Thus, the study further demonstrates that facial recognition technologies tend to be least accurate when processing data on darker-skinned females and most accurate when processing data on lighter-skinned males.
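The subgroup comparison at the heart of that study can be expressed in a few lines. The sketch below is illustrative only (the records and resulting accuracy figures are invented, and it does not reproduce the Gender Shades dataset or protocol), but it shows how accuracy is disaggregated by skin tone and gender rather than reported as a single average:

```python
# Illustrative sketch: hypothetical classifier outputs, not real study data.
from collections import defaultdict

# Each record: (skin_tone, true_gender, predicted_gender) -- all invented.
records = [
    ("lighter", "male", "male"), ("lighter", "female", "female"),
    ("lighter", "female", "male"), ("lighter", "male", "male"),
    ("darker", "male", "male"), ("darker", "female", "male"),
    ("darker", "female", "male"), ("darker", "female", "female"),
]

hits, totals = defaultdict(int), defaultdict(int)
for tone, truth, predicted in records:
    key = (tone, truth)
    totals[key] += 1
    hits[key] += int(truth == predicted)

# A single overall accuracy would hide the gap the per-subgroup view exposes.
for key in sorted(totals):
    print(f"{key[0]:>7} {key[1]:>6}: {hits[key] / totals[key]:.0%} "
          f"accuracy over {totals[key]} samples")
```

Reporting results this way is what allowed the researchers to show that a system with a high headline accuracy can still fail darker-skinned women far more often than lighter-skinned men.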

Several members of Congress have called for blanket restrictions on the use or funding of facial recognition technologies by the federal government. The Justice in Policing Act, a bicameral bill introduced in the wake of the George Floyd protests by Reps. Karen Bass, D-Calif., chair of the Congressional Black Caucus, and Jerry Nadler, D-N.Y., chair of the House Judiciary Committee, and Sens. Cory Booker, D-N.J., and Kamala Harris, D-Calif., mandates the use of body cameras by law enforcement officers but prohibits them from employing facial recognition technology.

Another bill, the Ethical Use of Facial Recognition Act, introduced by Sen. Jeff Merkley, D-Ore., puts a moratorium on the use of facial recognition technologies by federal agencies, at least until Congress can further study their uses. In its findings, the text of the bill notes, “Facial recognition has been shown to disproportionately impact communities of color, activists, immigrants, and other groups that are often already unjustly targeted.” A more recent iteration of this kind of legislation is the Facial Recognition and Biometric Technology Moratorium Act of 2020, which was sponsored by Democrats in both the House and Senate. The bill makes it unlawful for any federal agency or official “to acquire, possess, access, or use in the United States any biometric surveillance system” or “information derived from a biometric surveillance system operated by another entity.”

Some lawmakers, however, support expanding the use of facial recognition technologies. Rep. Earl L. "Buddy" Carter, R-Ga., recently introduced a bill, the Advancing Facial Recognition Act, that requires the Department of Commerce and the Federal Trade Commission to jointly study the impact of facial recognition technology on interstate commerce. This would involve, among other things, identifying “each Federal rule, regulation, guideline, policy, and other Federal activity … related to facial recognition technology” as well as assessing the “potential concrete harms to individuals related to the use of facial recognition technology.”

Thus, regardless of the partisan differences that continue to hinder progress toward an omnibus federal privacy law, members of both parties seem to agree that at least some form of federal regulation is needed for facial recognition technologies. As of late last year, at least two pieces of bipartisan legislation that would regulate facial recognition technology were introduced in the Senate. The Commercial Facial Recognition Privacy Act of 2019 would prohibit the use of facial recognition technologies in the absence of affirmative consent from individuals. This bill was sponsored by Sen. Roy Blunt, R-Mo., and cosponsored by Sen. Brian Schatz, D-Hawaii, and was last referred to the Committee on Commerce, Science, and Transportation. Another piece of bipartisan legislation, introduced in November 2019 by Sen. Christopher Coons, D-Del., and cosponsored by Sen. Mike Lee, R-Utah, was named the Facial Recognition Technology Warrant Act. It would require law enforcement agencies, such as the Federal Bureau of Investigation and Immigration and Customs Enforcement, to obtain a warrant to use facial recognition technology to surveil individuals. It was last referred to the Committee on the Judiciary.

Yet while there is no federal privacy law regulating the use of facial recognition technologies, and congressional action has stalled, states have stepped in to fill the void. Dozens of pieces of legislation are pending in U.S. state legislatures that would regulate the use of facial recognition technologies. Late last year, California enacted a three-year moratorium on the use of facial recognition in police body cameras, which took effect in January 2020. The Massachusetts Senate also recently passed a police reform bill, now headed to the House, that puts a temporary moratorium on government use of facial recognition technology. Boston had already joined cities such as San Francisco and Oakland, California, in passing ordinances banning police use of the technology. With so many state bills on the docket and these issues high on the national agenda, more such laws are likely to pass in other states.

Conclusion: Privacy in tandem with, not against, other fundamental rights

Efforts to protect the right to privacy have long shared the same goals as efforts to preserve other fundamental rights. Stated differently, technologies that pose risks to privacy can also pose risks to other fundamental rights. Facial recognition technology in particular “doesn’t just pose a grave threat to our privacy, it physically endangers Black Americans and other minority populations in our country,” as Sen. Edward Markey, D-Mass., recently stated. As the Ethical Use of Facial Recognition Act, sponsored by Sen. Jeff Merkley, D-Ore., and cosponsored by Sen. Cory Booker, D-N.J., states in its findings, “There is evidence that facial recognition has been used at protests and rallies, which could chill speech.” As the COVID-19 pandemic has demonstrated, the same is true of the nexus between privacy and public health: efforts to protect privacy and promote public health do not compete with one another, but gain strength from one another. Indeed, fundamental rights — to privacy, health, non-discrimination, and expression — are not mutually exclusive, but mutually reinforcing.

Thus, addressing the existing privacy concerns around the use of facial recognition technologies should be accomplished alongside efforts to ensure that these technologies do not exacerbate injustices and inequality based on protected characteristics such as race and gender.

Photo by Tyrell Charles on Unsplash

