As embarrassing as this is to admit to my fellow privacy peers, my Instagram account was recently hacked. In a moment when I wasn't thinking logically, I clicked on a link a "friend" had sent me (unbeknownst to me, the friend's account had also been hacked). Ten minutes later, I was locked out of my account, and the hacker had rerouted my two-factor authentication to a different phone number.
I scrambled to text as many people as I could, telling them my account had been hacked and asking them not to engage with it until things were resolved. However, once the good people at Meta helped me restore my account, I discovered just how many random people I was still connected to from various walks of life. For the few days I was locked out, most understood I was not likely in the business of selling crypto schemes or participating in "influencer ambassador" programs. The language and pictures the hacker used in my stories and personal messages didn't reflect my regular communication style.
But as I cleaned up the mess from the hack, I kept thinking, "What if?" What if I were a major influencer whose livelihood depended on the platform? What if the hacker was someone familiar with my social posting style? Or what if the hacker wasn't even a real person, but some form of artificial intelligence that knew me better than I knew myself?
A future like that of 1982's "Blade Runner," with the Voight-Kampff test differentiating humans from replicants, isn't that far off. We're just a few months into 2023, and we have already seen an explosion of new generative AI tools. According to the World Economic Forum:
"Generative AI refers to a category of artificial intelligence algorithms that generate new outputs based on the data they have been trained on. Unlike traditional AI systems that are designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, audio, and more."
Generative AI is not new, but products like OpenAI's ChatGPT, Stable Diffusion and Lensa AI have brought this technology mainstream.
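Part of what pushed these tools mainstream is how little it takes to use them. As a rough illustration, here is a minimal sketch using OpenAI's Python library as it shipped in early 2023 (the pre-1.0 openai package); the prompt and the placeholder API key are my own illustrative choices, not part of any product:

```python
# Minimal generative AI call: send a prompt, receive newly generated text.
# Assumes the pre-1.0 `openai` package (pip install openai) and an API key.
import openai

openai.api_key = "sk-..."  # your key here; never commit real keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write two lines about online privacy."}],
)

# The reply is new content generated on the fly, not retrieved from a database.
print(response.choices[0].message.content)
```

A dozen lines, and the model produces text, and with other endpoints, images or audio, that never existed before.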
Putting aside the important questions around pervasive bias, student plagiarism, trade secrets, workforce reduction and more, I am concerned about generative AI's potential to create fake virtual identities. Just the other day, I heard an advertisement for Podcastle's Revoice, a solution that leverages generative AI to create a "digital copy of your own voice." According to its website, you can "save countless hours by allowing Revoice to handle your repetitive tasks … generate whatever you need in your natural voice just by writing up a script and letting our AI do the rest!" I'm sure there is much convenience here, but as with most privacy trade-offs, have we really thought through the consequences?
Podcastle markets its product as safe and secure, but given the minimal security language and vague retention timeline in its privacy statement, I don't feel so reassured. If anything, this technology makes me think of worst-case scenarios. Given my recent Insta-hack, someone could pull my image and voice from a publicly available webinar recording and use these tools to create content that looks and sounds just like me.
The "21st century’s answer to Photoshopping,"a.k.a deepfake technology, is a type of deep learning AI that creates fake content. From porn, to fake news, bullying, scams and more, deepfake technology has been a challenge for several years. Yet generative AI adds a whole new dimension. Now all you need is an idea and the AI can do the rest in terms of content creation. The U.S. Federal Trade Commission recently published an article about the myriad of malicious possibilities.
"Fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews or to help create malware, ransomware and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud."
The combination of deepfake technology with generative AI has created the ultimate "powder keg."
Is all hope lost for us, then? I say not yet! One of the great things about technology is that, as harmful as it can be, it can also be used for good. One such example is Worldcoin, a startup co-founded by Sam Altman, who is also the co-founder and CEO of OpenAI. Worldcoin's privacy-preserving technology could, in part, serve as "proof of personhood," as described in a recent TechCrunch article:
"Once the scan is complete, the individual is added to a database of verified humans, and Worldcoin creates a unique cryptographic 'hash' or equation that's tied to that real person. The scan isn't saved, but the hash can be used in the future to prove the person's identity anonymously through the app, which includes a private key that links to a shareable public key. Because the system is designed to verify that a person is actually a unique individual, if the person wants to accept a payment or fund a specific project, the app generates a 'zero-knowledge proof' — or mathematical equation — that allows the individual to provide only the necessary amount of information to a third party."
Worldcoin's technology is still in its nascent stage, but this does seem to be the direction we are heading. Instagram used a video selfie I uploaded to prove I was the real me. Identity verification companies like Clear are already widely used in airports, stadiums and other semipublic venues. There are growing concerns around unchecked biometric use, and specifically around biometrics serving as proof of identity. But given the scary combination of deepfakes and generative AI, a privacy-preserving form of biometric authentication might just be our best recourse going forward.
Oscar Wilde once said, "Be yourself; everyone else is already taken." But what happens to you when someone, or something, else takes over your identity? That is a question we will all soon have to answer.