Dear privacy pros,
It has been difficult to keep up with updates on the privacy front in Singapore, given the constant barrage of breaking news and endless press coverage of the COVID-19 situation. Whether you are based in Singapore or reading this in another country, I hope that all readers will stay healthy and safe.
Based on the articles I managed to review, one big topic that seems to be gaining a lot of traction recently is the use of artificial intelligence–driven facial-recognition technology, as foreshadowed in our crystal-ball-gazing edition of the Asia-Pacific Digest at the end of 2019.
It all started with the New York Times exposé on tech startup Clearview AI, whose operations appear to be shrouded in secrecy. This was followed by further revelations regarding the extent to which law enforcement agencies, including federal agencies like the U.S. Federal Bureau of Investigation, are leveraging the company’s database of 3 billion photographs, with more than 600 law enforcement agencies signing up for the service within the last year alone. Concerningly, law enforcement officers are now using the tools provided by Clearview not only to identify the perpetrators of crimes, but also to identify victims of child abuse.
Now, this little company is no longer unknown, and much ink has already been spilled over privacy implications arising from the use of such technology. While I would not go so far as to say that this portends the end of privacy as we know it, it is clear that more widespread availability and use of these tools present a risk to the normative concepts and standards of privacy.
Apart from the legality and sustainability of Clearview’s business model (scraping photos off social media sites and other publicly available sources in apparent contravention of applicable terms of service), it seems clear to me that the technology the company controls and the power this allows it to wield cannot remain unregulated.
Based on the NYT podcast, it would appear that Clearview’s founder and investors have barely started to grasp some of these issues. Besides law enforcement agencies, Clearview has allegedly started providing services to corporate entities like security companies. Even if there is currently no intention to do so, there seems to be real danger that, if left unregulated, Clearview (or other startups in its wake) might be pushed by VCs participating in subsequent rounds of funding to further monetize the technology by making it generally available to consumers.
Unsurprisingly, the backlash has begun. A number of tech giants have served cease-and-desist letters on Clearview, asking it to stop scraping photos from their sites and to remove previously scraped images from its database. Class-action lawsuits have been filed in Illinois and Virginia. A moratorium on the use of Clearview’s technology by police officers has been imposed in New Jersey, and a number of similar bills have been introduced in other U.S. states.
As you ponder these meaty issues, the opinion pieces on the privacy debate, on protecting privacy in an AI-driven world, and on best practices companies should adopt in an ever-changing privacy landscape should also prove instructive.
With that, I will leave it to you to digest the articles.
P.S. A gentle reminder that the call for proposals for this year’s Asia Privacy Forum closes 16 Feb. We look forward to receiving your submission if you are interested in speaking at what promises to be another stellar event!