There's no doubt the rapid growth of generative artificial intelligence and large language models like ChatGPT is taking the business world by storm and getting the attention of the privacy profession.
During her keynote address at the IAPP Global Privacy Summit 2023 in Washington, D.C., author and generative AI expert Nina Schick demonstrated the eye-opening growth of ChatGPT, pointing out it only took five days for it to reach 1 million users and two months to reach 100 million users. This growth significantly outpaces other popular social networks and online streaming services like Instagram, Spotify and Facebook.
With that, ChatGPT is also getting the attention of data protection authorities.
Last week, Italy's DPA, the Garante, was the first regulator to take measures by temporarily banning ChatGPT and investigating the application's potential violation of the EU General Data Protection Regulation. Of note, the Garante accuses the Microsoft-backed OpenAI of not verifying the age of the application's users. The Garante said ChatGPT did not have "any legal basis that justifies the massive collection and storage of personal data" in order to "train" the service. OpenAI now has less than three weeks to respond to the authority with solutions or face steep GDPR fines.
The European Union is actively working on the proposed EU AI Act, which would regulate so-called high-risk AI systems.
While on stage Tuesday, Schick, who said she often has to update her speech as news related to generative AI is constantly developing, predicted that by 2025, 90% of online content will be AI generated.
Fittingly, she'll have to update her speech once again. That's because, during her presentation Tuesday, the Office of the Privacy Commissioner of Canada announced it had launched an investigation into ChatGPT.
"AI technology and its effects on privacy is a priority for my Office," Privacy Commissioner Philippe Dufresne said. "We need to keep up with — and stay ahead of — fast-moving technological advances, and that is one of my key focus areas as Commissioner."
Though details on the investigation are still developing, the OPC said the investigation comes after a complaint was filed that alleges ChatGPT collected, used and disclosed personal information without consent.
Litigation and the rise of an inauthentic internet
In addition to regulatory enforcement, popular generative AI systems will likely face a rapid increase in litigation, according to Schick.
Axios recently reported that generative AI "is a legal minefield." Issues include whether AI companies have the right to the data that trains their systems and who is responsible when a system outputs misleading or dangerous information.
As more money flows toward developers of generative AI systems, companies are more willing to take risks. "The more money that flows in, the faster people are moving the goal posts and removing the guardrails," said Matthew Butterick, an attorney whose firm has litigation pending against several companies based on how their AI systems work.
For Schick, the lines between what is real and what is AI-generated are quickly blurring. True, the Pope's fake jacket and AI-generated pictures of former President Donald Trump being arrested and attempting to flee law enforcement may seem funny or easy to decipher now, but deepfakes and AI-generated videos of people — real or fake — will become more common and more advanced.
In a world premiere here at the Global Privacy Summit, Schick unveiled "the first digitally transparent piece of AI-generated content." She said the video is cryptographically signed and that she is "working with a community of developers on an open standard that would be universally adopted across the internet so that the infrastructure is in place for people to always be able to see the provenance of content."
The goal is to create a more "authentic" internet by demonstrating transparency. Truepic, an authenticity infrastructure provider, partnered with Schick and Revel.ai, an ethical "leader in hyperrealistic and synthetic content."
"Untrustworthy digital content, akin to a poisoned well, endangers individuals, businesses, and society," Schick said. "The first signed AI-generated video proves that creating transparent digital media is both possible and vital. Possessing the antidote to a compromised information ecosystem, we must question why we haven't deployed it until now. It’s time to sign all digital content!"
In the meantime, the popularity of generative AI systems will continue to grow, as will regulatory enforcement and litigation.