This article is the first installment of a three-part series unpacking platform liability laws in the U.S. and the EU and their potential application to generative AI. Parts 2 and 3 will examine the U.S. and EU landscapes, respectively.

The modern-day internet is a vast and complex digital ecosystem integrating artificial intelligence, the Internet of Things and advanced cybersecurity. It is continuously evolving and shaping society in profound ways. Today, 5.35 billion of the world's 8 billion people, about 66% of the global population, have access to the internet. Our daily lives are deeply intertwined with our digital ones, and initiatives such as the metaverse are even pushing toward fully virtual experiences.

Increased dependence on the internet for commerce, entertainment and even social connection has also magnified the powerful role intermediaries, such as search engines and social media platforms, play in shaping our virtual lives and experiences. With AI, specifically generative AI, now part of the equation, the implications for privacy, intellectual property, misinformation and disinformation, and free speech are amplified. This has raised questions about the continuing role of legislation from the early days of the internet, such as Section 230 of the Communications Decency Act, Section 512 of the Digital Millennium Copyright Act and the EU e-Commerce Directive, in governing the modern-day internet. More specifically, it has raised the question of whether intermediaries' liability for harm caused by users should be redefined.

The law's role in shaping the internet

The early days of the internet in the late 20th century were marked by a new era of innovation characterized by unprecedented connectivity and information exchange, which revolutionized how people communicated, accessed information and conducted their business.

However, as with any new technology, the benefits of the internet were also accompanied by harms that had to be balanced against the technology's immense promise for innovation and economic growth. The fight for a free and open internet even led to John Perry Barlow's manifesto, "A Declaration of the Independence of Cyberspace," a passionate libertarian document defending innovation and arguing against government regulation of the internet.

Consequently, to guarantee continued growth of the emerging digital economy and to preserve free speech, intermediaries were granted immunities and safe harbors against legal liability. Essentially, intermediary liability regulations exempt internet intermediaries from liability that may arise from users' online activities.

Such immunity for intermediaries seeks to achieve three goals:

  • Promoting economic activity and innovation.
  • Protecting freedom of speech.
  • Encouraging intermediaries to deal with illegal content and prevent harm.

In the U.S., two separate laws enacted in the late 20th century shield intermediaries from liability, namely Section 230 of the Communications Decency Act of 1996 and Section 512 of the Digital Millennium Copyright Act of 1998. They remain in force today.

Section 230 of the CDA provides a strong federal immunity to intermediaries. It prevents providers or users of interactive computer services from being held liable "as the publisher or speaker of any information provided by another information content provider." That is, an online intermediary such as a website will not be held liable for hosting any third-party content.

This provision offers two protections to intermediaries: immunity from liability for harmful or illegal content posted by third-party users, and immunity for voluntary, good-faith content moderation undertaken to remove or restrict access to content on their platforms.

This provision is what arguably sets internet intermediaries apart from traditional newspapers. The former are free to structure content moderation in the manner they prefer without fear of liability, whereas the latter can be held liable because they exercise editorial control over the published content.

Section 512 of the DMCA, on the other hand, limits intermediary liability for copyright infringement by providing a safe harbor to online service providers. To qualify for the safe harbor, a platform must fall within one of the four categories of service providers covered by the DMCA and meet certain conditions, most notably responding to notifications from copyright owners that make the online service provider aware of infringing content on its service.

Similarly, in 2000, the EU enacted the e-Commerce Directive, which shields three types of intermediaries from liability:

  • Mere conduits that transmit information.
  • Caching services.
  • Hosting services that host user content, provided the host does not have actual knowledge of illegal activity or removes the illegal content upon becoming aware of it.

The e-Commerce Directive remains in force today. However, the EU has also enacted a new set of rules for digital services across member states under the Digital Services Act. The new law preserves the protections for the above categories of intermediaries but also introduces new obligations.

AI in cyberspace

The digital technologies we use today were little more than science fiction in the days of ARPANET, the 1960s U.S. government project that marked the first use of packet-switching technology and whose initial network connected just four computers at four U.S. universities. Today, Big Tech companies dominate much of the digital world, having built vast empires through platforms accessed by billions of users worldwide, and drive the global digital economy. Their services enable the hosting of a vast array of content.

The rise of AI, especially generative AI, as yet another revolutionary technology that, much like the internet, carries both immense promise and a significant threat of harm, not only makes the internet more interactive and personalized but also tests the boundaries of the immunity intermediaries were granted in the internet's early days.

The recent Gonzalez v. Google case not only brought into question the scope of the immunity offered under Section 230 of the CDA, but also demonstrated how much automation has transformed the internet since its inception. After the 2015 terrorist attacks in Paris, Google was sued by the family of one of the victims. They claimed YouTube, owned by Google, hosted radicalizing videos that incited violence, and that YouTube's algorithms promoted this content to users whose "characteristics indicated that they would be interested in ISIS videos."

In holding that the plaintiffs failed to state a viable claim for Google's direct or secondary liability in aiding and abetting ISIS, the U.S. Supreme Court deemed it unnecessary to address whether big technology companies such as Google can be held liable under Section 230 for the content their recommendation algorithms show to particular users. In other words, the scope of Section 230 remains ambiguous.

However, since 2022, the internet has moved beyond recommendation algorithms to generative AI. Popular chatbots like ChatGPT that rely on user prompts to generate outputs have already led to defamation lawsuits. Most recently, in Ireland, Microsoft's MSN web portal featured a news article with the headline "Prominent Irish broadcaster faces trial over alleged sexual misconduct" alongside a photo of Irish DJ Dave Fanning, who was not the broadcaster in question. It turned out a journalism outlet called BNN Breaking had used an AI chatbot to paraphrase an article from another news site, and the resulting story was then promoted on MSN.

In a 2023 U.S. case, a radio host sued OpenAI for alleged defamation by ChatGPT. In another case, the plaintiff claimed that when searching for his name, Microsoft's Bing returned an AI-generated summary that mixed facts about him with those of a namesake who once pleaded guilty to seditious conspiracy. Although Section 230 of the CDA was not invoked, the claims are similar to ones that have historically given rise to defenses under Section 230. If the defense is invoked in future cases, courts will have to determine whether providers of generative AI tools can be held liable for content their AI systems generate in response to user prompts.

Generative AI is likely to raise challenges for the application of Section 230, especially given how generative AI tools are designed. To produce an output, a generative AI system requires a prompt from the user. Whereas Section 230 exempts an interactive computer service from liability for third-party user-generated content, generative AI systems and their users depend, to an extent, on each other to create content. One potential challenge in applying Section 230 would therefore be determining whether the provider of a generative AI tool is responsible for creating any illegal or harmful content. That determination will likely also depend on the specific application of the generative AI tool or system.

The buzz around generative AI is already prompting commentary on whether the immunity under CDA Section 230 and the safe harbor under DMCA Section 512 should cover this new technology. Additionally, the DSA has been enacted to regulate the ever-evolving digital landscape with new obligations while balancing innovation by preserving the liability exemptions of the e-Commerce Directive. Parts 2 and 3 of this series will unpack the various elements of the U.S. and EU legislation, respectively, to assess their impact on and potential application to generative AI.

Uzma Chaudhry is the IAPP AI Governance Center Fellow.