Greetings from Portsmouth, New Hampshire!
We’re finally coming out of a cold and wet weather pattern here in New England. I did not need a jacket on my way into work today, and if I squint just enough, I can convince myself the sun is peeking through the clouds. I hope your local weather has been more spring appropriate.
Facial-recognition technology received a lot of attention this week. San Francisco banned it; Oakland, California, and Somerville, Massachusetts, are considering doing the same; authors opined about it; and I had the opportunity to hear from thought leaders on the subject at a conference last Friday. You may also remember the criticism directed at the permissive facial-recognition technology provision of the Washington Privacy Act before that bill’s demise. The technology is a flashpoint for issues and opinions.
Though my words here will not resolve the debate surrounding facial-recognition technology, I think it is worthwhile to highlight some takeaways from last Friday’s conference, "About Face: The Changing Landscape of Facial Recognition," at Northeastern University Law School.
The conference featured keynote speaker Cyrus Farivar, an investigative tech reporter for NBC News, and two panels moderated by New York Times reporter Natasha Singer. The first panel, “Understanding the Social Impacts and Challenges of Facial Recognition Technology,” featured the ACLU Massachusetts' Kade Crockford, the Future of Privacy Forum's Brenda Leong, and professor and author Chris Gilliard. The panelists described the different sophistication phases of the technology — face/not face recognition, facial characteristics, verification and identification, with increasing levels of invasiveness — and the risks it poses to underrepresented and marginalized communities. In a conversation about the well-documented challenges facial-recognition technology has had identifying individuals of certain races or genders, Gilliard commented that “a computer cannot identify a gender because a person does that for themselves.” The statement highlights the normative value-signaling questions bound up in facial-recognition technology.
The second panel, “Regulatory Possibilities and Problems,” highlighted the lack of regulation governing the deployment of facial-recognition technology and how quickly the landscape has changed in the past five years due to advancements in processing power and access to large data sets. The panelists included Jennifer Lynch from the Electronic Frontier Foundation, Clare Garvie from the Center on Privacy and Technology at Georgetown Law, and Jadzia Pierce from Covington & Burling. One surprising point: the U.S. government most frequently contracts with non-U.S. companies to provide facial-recognition technology.
The content of the conference was phenomenal, and I encourage you to seek out publications from each of the panelists or their organizations for more information about salient facial-recognition technology issues. My key takeaway was that changes in the facial-recognition technology space are happening rapidly, and there are consequential policy choices that must be made, but companies and local governments are operating in a relative regulatory vacuum. Pro- or anti-facial-recognition technology, there is urgency for a more robust and public debate about its risks and benefits (incidentally, the House Committee on Oversight and Reform announced yesterday a hearing on facial-recognition technology, scheduled for Wednesday, May 22).
Facial-recognition technology was not the only privacy story in the U.S. this week: Location-data-sharing practices again drew the ire of a congressperson, the Network Advertising Initiative published a new code of conduct, and California Attorney General Xavier Becerra hopes the effective date for the CCPA does not mimic the rollout of Obamacare. Find these stories and more below.