This article is the second installment of a three-part series that will unpack platform liability laws in the U.S. and the EU and analyze their potential application to generative AI. Part 1 explores platform liability. Part 3 will analyze the landscape in the EU.

The internet has become a vast digital ecosystem where emerging technologies, such as artificial intelligence, increasingly shape online social experiences. Today's internet is governed by a complex web of old and new regulations. As AI, particularly generative AI, plays a greater role across the internet, it compounds and accelerates the confusion created by overlapping and sometimes conflicting rules.

One prominent example is the interaction between generative AI and Section 230 of the Communications Decency Act, a law that has been lauded as the Magna Carta of cyberspace and criticized for ruining the internet. Generative AI also raises copyright challenges, making the liability exemptions under Section 512 of the Digital Millennium Copyright Act of 1998 worth noting.

Section 230

Section 230 provides federal immunity to providers and users of interactive computer services. That is, it prevents them from being held legally responsible for information provided by a third party. This immunity is not absolute: federal criminal law, intellectual property law, state laws consistent with Section 230, communications privacy law and sex trafficking law remain unaffected.

The courts have interpreted this immunity broadly, allowing for early dismissal of cases. The immunity emerges specifically from two separate paragraphs of Section 230(c), which the courts have interpreted as creating two distinct liability shields.

Section 230(c)(1) shields platforms from liability for third-party content they host. Section 230(c)(2) shields platforms from liability for any action taken voluntarily and in good faith to restrict access to material the provider or user considers obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable, whether or not such material is constitutionally protected.

The rationale for the immunity was set out in Zeran v. America Online: imposing tort liability on online service providers for the communications of others would amount to intrusive government regulation of speech in the burgeoning cyberspace. Immunity was therefore provided against any cause of action that would make interactive computer services liable for information originating from third-party users.

This article focuses on the first type of immunity under Section 230(c)(1).

Section 230(c)(1)

This subsection states, "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Generally, Section 230(c)(1) applies when a three-step test is met: The provider is an interactive computer service, the content in question is provided by another information content provider, and the claim treats the provider as the publisher or speaker of that content.

Three key elements of Section 230(c)(1)

Interactive computer service. This has been broadly defined by statute as "any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server." This broad definition captures big online intermediaries, such as Google and Meta, and companies providing broadband and hosting services, among others.

Information content provider. This is also defined by statute and means "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service."

Treatment of provider as publisher or speaker. The terms publisher and speaker are not defined by statute, and courts have construed claims treating providers as publishers or speakers both broadly and narrowly. In Zeran, the U.S. Court of Appeals for the Fourth Circuit held that Section 230 bars "lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content."

The U.S. Court of Appeals for the Ninth Circuit explained this similarly, noting "any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230." However, in more recent cases, such as Lemmon v. Snap, the Ninth Circuit allowed a claim based on the "predictable consequences of designing Snapchat in such a way that it allegedly encourages dangerous behavior" and narrowed the scope by reiterating that businesses publishing user content on the internet do not have an all-purpose get-out-of-jail-free card.

Section 230(c)(1) effectively distinguishes those who create content (information content providers) from those who provide access to content (interactive computer services). As such, for the latter to be held liable, the legal inquiry rests on whether the service provider developed the content.

Generative AI and CDA Section 230

If and when a Section 230(c)(1) defense is invoked in a generative AI case, the outcome will be determined in a fact-specific manner, as with previous Section 230 cases. A generative AI case will also have to meet the three-part criteria for immunity to apply. While clearing the first step may be relatively simple, given that the term interactive computer service has been defined broadly, the second step is likely to be the more complex inquiry: determining whether an interactive computer service, such as a provider of generative AI services, is also an information content provider. In other words, can the generative AI service provider be wholly or partially responsible for the creation or development of the information?

A recent Congressional Research Service report analyzes the potential application of Section 230 to generative AI. It concludes the application of Section 230 would "vary across different generative AI products, different applications of a single product, and different legal claims about an application or product."

The report also summarizes various commentaries emerging on this subject. Citing a recent paper, it notes that not all legal claims would turn on the same aspects of generative AI. Some AI products "operate on something like a spectrum between a retrieval search engine (more likely to be covered by Section 230) and a creative engine (less likely to be covered)."

Another commentary focuses on the text-generation abilities of large language models. Although they require user prompts, LLMs can hallucinate and create new text; where an output does not merely regurgitate information from the training data, the model is arguably creating content itself. Such a conclusion would make this aspect of LLMs akin to Federal Trade Commission v. Accusearch, in which the website operator was denied Section 230 immunity because it was responsible for "the development of the specific content that was the source of the alleged liability."

The report also notes arguments that Section 230 should protect products like ChatGPT because they rely entirely on third-party input and "use predictive algorithms and an array of data made up entirely of publicly available information online to respond to user-created inputs." Whether a generative AI service provider is also an information content provider would therefore ultimately depend on the specific facts and the aspects of generative AI in dispute.

The final hurdle for applying Section 230 to generative AI will be determining whether a generative AI service provider can be considered a publisher or speaker. This will likely depend on how broadly or narrowly a court construes those terms in a particular legal claim. A recent paper argues it is possible some generative AI functions, such as algorithms used to filter and display personalized content based on user inputs, could fall within Zeran's "traditional editorial functions." However, it also argues AI systems could be exposed to claims based on negligent design or product liability grounded in the system's own conduct, rather than in information contained in training data, and could therefore fall outside protected Section 230 publisher activity.

Section 512 of the DMCA

Another law enacted to promote and safeguard digital innovation in the U.S. was the DMCA, which amended U.S. copyright law to address the relationship between copyright and the internet. An important update was the introduction of safe harbors for qualifying service providers through Section 512. It limits the liability of four types of service providers, shielding them from monetary liability for direct, contributory and vicarious infringement resulting from the actions of their users, in exchange for meeting certain conditions and cooperating with copyright owners to expeditiously remove infringing content.

Safe harbors only apply to qualifying service providers performing certain functions, which are defined under Sections 512(a), (b), (c) and (d). For this purpose, Section 512 contains two definitions of service provider.

Mere conduits

To qualify as a mere conduit, identified under Section 512(a), the narrow definition of service provider must be satisfied. This definition is "an entity offering the transmission, routing, or providing of connections for digital online communications, between or among points specified by a user, of material of the user’s choosing, without modification to the content of the material as sent or received."

Caching, hosting and linking

To qualify as a service provider for caching, hosting and linking under Sections 512(b), (c) and (d), respectively, a broader definition must be satisfied. A service provider in the context of these three provisions means "a provider of online services or network access, or the operator of facilities therefor, and includes an entity described in subparagraph (A) [Section 512(k)(1)(A)]." Courts have previously determined service providers such as Amazon and YouTube satisfy this definition.

The conditions all service providers must meet include maintaining a policy of terminating repeat infringers and accommodating standard technical measures used by copyright owners to identify infringing content.

However, for caching, hosting and linking service providers to be eligible for the immunity, they must not have actual knowledge of the infringing content, must not receive a financial benefit directly attributable to the infringing activity, and must comply with notice-and-takedown requirements. Once a service provider is notified by a copyright owner of the existence of infringing content, to qualify for immunity it must respond either by taking down the content or by informing the copyright owner that it does not believe the content is infringing. As such, service providers exercise considerable discretion here: they decide whether the content actually infringes someone's copyright or may fall within an exception like fair use. This affects what stays online and what is removed, especially as the DMCA does not mandate specific procedures for making such determinations.

Generative AI and DMCA Section 512

The potential application of the safe harbors to generative AI will depend on various legal factors. First, it must be determined whether generative AI providers qualify as one of the four types of service providers. If a particular application or use case of generative AI falls under Section 512(b), (c) or (d), a notice-and-takedown mechanism must be operationalized to deal with instances of the AI returning infringing work in order to qualify for the safe harbor. However, this final point has triggered a much larger debate on fair use. There is currently pending litigation on whether using copyrighted work to train AI models, which enables them to generate outputs containing copyrighted content, amounts to fair use. This topic is discussed in depth in the recent AI Governance in Practice Report 2024.

As mentioned previously, when service providers are notified of infringing content, they can either take it down or keep it online if they determine it is not infringing for reasons such as fair use. With litigation pending on this point, service providers may be unable to confidently reject takedown requests based on fair use until the courts reach final decisions. As such, it is difficult to determine the scope of DMCA Section 512's application to generative AI.

Conclusion

As AI continues to integrate itself into the digital ecosystem, organizations will have to navigate a complex legal landscape. Implementing a robust AI governance program that targets the organizational policies and system-level intricacies relevant to the use case and the industry the organization operates in can greatly facilitate such navigation.

In the contexts of CDA Section 230 and DMCA Section 512, generative AI has created legal uncertainties that will be resolved in due course. In the interim, however, organizations can take preventative measures by implementing AI governance to mitigate the risk of violating the law. For instance, under CDA Section 230, much depends on the specific application or aspect of a generative AI system. Developing technical and nontechnical documentation for the model and the system can therefore provide insights into the generative AI system's role in creating content.

Moreover, organizations can take steps to prevent the generation of infringing outputs and promote copyright safety, such as incorporating technical guardrails like filters and classifiers. Website operators can also opt out of unwanted scraping, as shown below, to prevent copyrighted content from becoming part of the training data of foundation models.
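As a minimal sketch of such an opt-out, a site can signal its preferences through its robots.txt file. The crawler tokens below, OpenAI's GPTBot and Google-Extended, are publicly documented, but the list is illustrative rather than exhaustive, and other AI providers use different tokens:

```
# robots.txt — illustrative opt-out from AI training crawlers
# GPTBot (OpenAI) and Google-Extended (Google) are documented
# crawler tokens; tokens for other providers should be verified.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note that robots.txt is a voluntary convention: compliant crawlers honor it, but it does not technically block access, so it complements rather than replaces other guardrails.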

Uzma Chaudhry is the former IAPP AI Governance Center Fellow.