Technology pundits frequently lament that our increasingly digital world has eroded consumer privacy by enabling businesses to collect and use more personal data. However, what is often lost in the conversation is that the growing use of artificial intelligence can actually increase consumer privacy by reducing the number of people who see that personal data.
People increasingly use AI for everyday tasks, from organizing their calendars to processing business receipts to sorting photos. For many of these services, users expect that no human is reviewing or processing their data. This is a privacy benefit because consumers are often more comfortable with computers processing their personal data than with humans doing so. For example, a recent study shows that individuals prefer dealing with remote entities that use computers to process data, rather than “immediately-present people that could judge them.” This distinction is also one reason why Gmail has flourished as a popular email service, despite complaints from privacy advocates about Google’s algorithms “reading” users’ emails to deliver contextual ads. The situation recalls the classic thought experiment about a tree falling in the forest: If a computer processes personal data and no human is around to review it, does it really violate anyone’s privacy?
A recent brouhaha involving unexpected human processing of personal data illustrates this point. In November, consumers discovered that Expensify—a popular app that employees use to quickly record and submit business-related expenses—was relying on crowdsourced human workers to manually enter data from receipts. Many users reasonably believed that the company performed this task automatically, given that Expensify advertises that its “SmartScan” feature uses optical character recognition (OCR) technology to automatically scan receipts and copy important information into convenient reports. What many consumers did not know was that Expensify had humans review receipts manually when its OCR technology failed to properly read a scanned receipt.
Expensify had switched in 2012 from using Amazon Mechanical Turk, an online crowdsourcing marketplace where companies hire individuals to complete brief tasks, to an internal team of human reviewers to address scanning errors. But it recently switched back to Amazon Mechanical Turk while testing a new feature. As a result, several Turk workers gained access to receipts consumers had submitted to Expensify that contained personal information—such as hotel bills, addresses, and signatures. Turk workers took to social media to point out this potential privacy violation. While Turk workers ultimately processed relatively few receipts compared to Expensify’s internal team—affecting less than 0.00004 percent of its customers—many consumers did not expect that humans would ever access their personal data.
This case highlights two major points that policymakers should consider going forward.
First, companies should not mislead consumers about when humans, rather than computers, have access to sensitive personal data. If companies claim to be exclusively using AI to process personal data but instead have humans involved in the data processing, the Federal Trade Commission should hold them responsible for engaging in unfair or deceptive practices.
Second, policymakers should encourage companies to automate more processes involving personal data to minimize its potential exposure to human workers. Many of the most serious privacy violations occur when sensitive personal data gets into the hands of another person. Every year, for example, hospitals discover medical workers who peek at the medical files of celebrities. By increasing the use of AI, organizations can reduce the potential for this type of exposure.
In short, while the amount of data that businesses collect continues to rise, along with the benefits society gains from its use, the growth of AI will create new opportunities to limit when and how workers at these businesses access personal data, thereby increasing consumer privacy.