AI and digital governance: Platform liability laws in the US


Contributors:
Uzma Chaudhry
CIPP/E
Former AI Governance Center Fellow
ATI
The internet has become a vast digital ecosystem where emerging technologies, such as artificial intelligence, increasingly shape online social experiences. Today's internet is governed by a complex web of old and new regulations. As AI continues to play a greater role across the internet, specifically through generative AI, it compounds and accelerates the confusion associated with overlapping and sometimes conflicting rules.
One prominent example is the interaction between generative AI and Section 230 of the Communications Decency Act, a law that has been lauded as the Magna Carta of cyberspace and criticized for ruining the internet. Generative AI is also raising copyright challenges, making the liability exemptions under Section 512 of the Digital Millennium Copyright Act of 1998 worth noting.
Section 230
Section 230 provides federal immunity to providers and users of interactive computer services, preventing them from being held legally responsible for information provided by a third party. This immunity is not absolute: federal criminal law, intellectual property law, state laws consistent with the section, communications privacy law and sex trafficking laws remain unaffected by Section 230.
The courts have interpreted this immunity broadly, allowing for early dismissal of cases. The immunity emerges from two separate paragraphs of Section 230(c), which the courts have interpreted as creating two distinct liability shields: Section 230(c)(1) prevents a provider or user from being treated as the publisher or speaker of content provided by another, while Section 230(c)(2) protects good-faith efforts to restrict access to objectionable material.