A few weeks ago, Facebook announced it would ask Australian users to upload intimate photos of themselves. The idea is a trial “Non-Consensual Intimate Image Pilot” program to prevent intimate images from being shared on the internet without consent. It is a refreshing change from the nefarious ends intimate photos are usually put to online (see “revenge porn,” the colloquial term for illicit image posting).

This latest effort by Facebook builds on its mission, announced in April, of “Using Technology to Protect Intimate Images and Help Build a Safe Community.” Under that policy, when someone’s intimate photos are publicly shared or posted on Facebook without consent, users are asked to report the image. Facebook’s Community Operations team then reviews and removes the image, and uses photo-matching technologies to block future upload attempts.

The new trial program goes one step further to nip “image-based abuse” in the bud. It will be implemented first in Australia, and if successful, will spread to the United States, United Kingdom, and Canada.

For now, Australians complete an online form on the Australian eSafety Commissioner’s official website and send an “image of concern” to themselves on Facebook Messenger. The Commissioner’s office notifies Facebook of the submission, and Facebook’s Community Operations team reviews and hashes the image. Facebook then deletes the image from its servers and stores the hash, and the user is asked to delete the image from Messenger as well. If someone later tries to upload the image on Facebook (or on Messenger or Instagram), the image will be run through the hash database, and any matches will not be posted or shared.

Hashing is similar to encryption in that it transforms data so that others cannot read it. But unlike encryption, where the protected data can be recovered by decryption (i.e., unlocking a safe), hashed data cannot practically be reversed (i.e., locking a safe and throwing away the key). A hash function runs in only one direction, converting input of any size into a fixed-length “fingerprint.” Password protection is a common application of the technology: the password you set is hashed, and only the hash is saved by your computer; there is no way to read the password back out of the machine. When you next log in, the password you enter is hashed and compared with the saved hash to see if they match. The only way to break in is to guess the password.
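The password flow described above can be sketched in a few lines of Python. This is a minimal illustration using SHA-256 from the standard library; real login systems add a salt and use deliberately slow hashes (e.g., bcrypt) to make guessing harder.

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way: produces a fixed-length digest that cannot be reversed.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# At setup time, only the hash is stored -- never the password itself.
stored_hash = hash_password("correct horse battery staple")

def check_password(attempt: str) -> bool:
    # Hash the attempt and compare digests; the original password
    # is never recovered from the stored hash.
    return hash_password(attempt) == stored_hash

print(check_password("guess123"))                      # False
print(check_password("correct horse battery staple"))  # True
```

Note that the stored hash reveals nothing about the password’s length or content: any input, short or long, yields the same fixed-size digest.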

Because Facebook hashes the intimate photos and videos users upload and then deletes the originals, the files themselves are no longer available to potential hackers for misuse. But should anyone upload a new file containing the same data, that file will be hashed and compared against the previously stored hashes. If there is a match, Facebook will not allow the file to be uploaded.
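The matching step works like a set-membership check against the hash database. A minimal sketch, again using a standard cryptographic hash for illustration (the byte strings and function names below are hypothetical):

```python
import hashlib

def hash_file_bytes(data: bytes) -> str:
    # Fingerprint the file contents; the bytes themselves need not be kept.
    return hashlib.sha256(data).hexdigest()

# Hypothetical store of hashes of previously reported images.
# The original image bytes are deleted after hashing.
blocked_hashes = {hash_file_bytes(b"reported image bytes")}

def allow_upload(data: bytes) -> bool:
    # Reject any upload whose hash matches a stored hash.
    return hash_file_bytes(data) not in blocked_hashes

print(allow_upload(b"reported image bytes"))  # False: blocked
print(allow_upload(b"a different photo"))     # True
```

One caveat: a cryptographic hash like SHA-256 only catches bit-identical copies; cropping or re-compressing the image would change the hash entirely. Production photo-matching systems therefore typically rely on perceptual hashing (e.g., Microsoft’s PhotoDNA), which produces similar fingerprints for visually similar images.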

The program raises important privacy concerns.

First, all uploaded files are vulnerable before they are hashed. Files can be intercepted during the upload process (e.g., a user uploads photos from a remotely monitored or virus-compromised computer), or even before the upload (e.g., a phone is stolen before the user has a chance to upload and delete the image). After the image is uploaded and received by Facebook, it is viewed by human eyes prior to being hashed. While Facebook will presumably screen employees and take preventive measures against employee abuse of the images, there remains real potential for the images to fall into the wrong hands.

Then, there is the issue of data storage. Privacy issues arise whenever personally identifiable information (PII) is collected from individuals and then compiled and stored somewhere. Stored data can always be hacked (see Yahoo, Equifax, Microsoft XP users, and Target, to name a few recent examples). Facebook largely sidesteps this issue because the intimate photos are stored only temporarily, before being hashed and deleted. But what about the images and videos sitting in temporary storage at any given moment? How long will it take for the eSafety Commissioner’s office to notify Facebook of a submission, for Facebook to respond, and for the user to delete the photo? Will Facebook protect pre-hash images well enough to guarantee they are never accessed and publicly released – enabling the very problem Facebook is trying to preempt?

Alternative solutions to image-based abuse in general (and not only on the Facebook platform) include: adjusting online image-posting policies so that anyone wanting to post illicit images of others must ask for permission first, rather than forgiveness later, after the damage has been done; fully criminalizing revenge porn, difficult as such laws may be to draft; or using machine learning to recognize nude photos and block non-consensual uploads, so that human eyes never view the illicit images (see Nude, a new application that uses machine learning to scan and hide intimate photos on your phone, without your images ever being sent to the application itself).

Facebook should be commended for its desire to address and eliminate revenge porn. Presumably, Facebook also has privacy experts and a legal team that is highly aware of all of the potential issues with asking users to upload intimate photos. 

But any users considering participating in the trial or finalized version of the program should think twice about the potential consequences and worst-case scenarios of uploading intimate photos anywhere on the internet. Until technology improves to the point where data storage is truly secure, and PII laws guarantee proper legal protections and remedies to security-breach victims, the best preemptive action is to refrain from sending intimate photos – or, better yet, not to take them in the first place. Taking nude Polaroids was already a risk 20 years ago; that risk remains today, and has increased exponentially with online sharing capabilities.

photo credit: Mr B's Photography Abandoned Tractor [Polaroid] via photopin (license)