Even as philosophers debate whether artificial intelligence is the gateway to a utopian future or an existential threat, and scientists drive the quest for artificial general intelligence, lawyers — in the trenches of the here and now — negotiate finely crafted AI agreements.

Since OpenAI launched ChatGPT two years ago, a wave of AI transactions has hit the market, with AI provisions proliferating in licensing deals, master service agreements, corporate transactions and more. Unlike practitioners in other legal areas, who can draw on a rich set of templates and forms for language on issues ranging from representations and warranties to liability and indemnities, AI lawyers operate in a greenfield environment rife with technological change and regulatory uncertainty.

While several states have passed AI legislation, most AI risks are not yet regulated, at least not by AI-specific laws, so lawyers copiously document and negotiate a dizzying array of potential contingencies. Beyond requiring AI developers and deployers to comply with the law, these contracts require parties to make warranties and promise indemnities that minimize risks relating to copyright, trade secrets, privacy, fairness and equity, cybersecurity and more.

Agreements with LLM developers

OpenAI's release of ChatGPT set off a feeding frenzy, with businesses lining up to adopt generative AI tools, either as deployers, to enhance existing business processes, or as developers, integrating generative AI into their products. In either case, companies need to license generative AI tools from large language model (LLM) developers, such as OpenAI, Microsoft, Google, Meta and Anthropic.

LLM developers offer off-the-shelf agreements for individuals and small business users, enterprise options for larger players, and fully negotiated deals for only the largest customers with specific needs. The quid pro quo is typically additional contractual protections for a higher price.

LLM developers' standard agreements diverge between consumer (business-to-consumer) AI products, which leave developers significant leeway to use or reuse customer information, and enterprise (business-to-business) versions, which are more protective. When onboarding generative AI, businesses seek to shield their customers and their own proprietary content and information from the risk that developers will use that data to train AI models, where it could benefit competitors or even resurface in model outputs down the road.

To mitigate these concerns, Microsoft, for example, makes strong commitments in the paid version of its Azure OpenAI Service to protect customer information in both inputs and outputs. In highlighted text on the product page, Microsoft commits: "Your prompts (inputs) and completions (outputs), your embeddings, and your training data: are NOT available to other customers; are NOT available to OpenAI; are NOT used to improve OpenAI models; are NOT used to train, retrain, or improve Azure OpenAI Service foundation models," and more (emphases in the original).

Similarly, AI platforms now offer customers a zero data retention option, typically coupled with automated screening of customer content for safety purposes. OpenAI, for example, explains: "With zero data retention, request and response bodies are not persisted to any logging mechanism and exist only in memory in order to serve the request."
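
For illustration only, here is a minimal sketch of what limiting data retention can look like at the API level, using the OpenAI Python SDK. The "store" flag shown below controls only whether an individual completion is persisted for later retrieval; true zero data retention is an account-level arrangement negotiated with the vendor, and the model name is illustrative:

    # Minimal sketch: requesting a completion without persisting it,
    # using the OpenAI Python SDK (pip install openai).
    # Note: true zero data retention is an account-level arrangement;
    # the store flag only controls whether this completion is stored
    # for later retrieval via the API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Summarize this indemnity clause."}],
        store=False,  # ask that this request/response pair not be stored
    )
    print(response.choices[0].message.content)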

Such commitments enable even businesses in highly regulated industries, such as banking, health care and legal services, to adopt generative AI tools.

Moreover, in response to concerns over potential intellectual property infringement by generative AI outputs, LLM vendors, including Microsoft, Google and OpenAI, now offer to indemnify paying customers against future claims, particularly claims that outputs infringe a third party's intellectual property. However, the types and scope of indemnification are limited, for example by a customer's obligation to follow product instructions to the letter. And customers may be required to indemnify vendors against claims arising from the customers' use of generative AI, including violations of laws, third parties' intellectual property or vendor policies.

Regardless of vendor commitments, deploying AI presents potential liabilities that can be mitigated only through proper policies and procedures. Contractual limitation-of-liability clauses typically start with vendors disclaiming or capping liability for indirect, incidental and consequential damages while, on the flip side, not limiting customer liability. Here too, customers can opt for higher-priced enterprise options that offer higher liability caps and limits on customer liability.

Reps and warranties

Lawyers who negotiate major corporate transactions, from asset and share purchases to mergers and acquisitions, seek to capture and allocate any conceivable risk. Many of these risks have been documented for decades, with contractual language becoming standardized on issues ranging from proper incorporation to labor and employment, tax, intellectual property and environmental liability. Parties typically negotiate on the margins of such baseline language, based on specific information, needs or leverage.

Of course, no one historically included contractual language about generative AI, a technology that was all but unheard of before 2022. Over the past two years, however, parties have increasingly converged around a new canon of AI representations and warranties. One commonly used model for corporate transactions is the National Venture Capital Association's model Stock Purchase Agreement. Last updated in October 2024, the form defines "Generative AI Tools" as "generative artificial intelligence technology or similar tools capable of automatically producing various types of content (such as source code, text, images, audio, and synthetic data) based on user-supplied prompts."

Under its generative AI representations, the acquired company warrants that it has used generative AI tools in compliance with all applicable licenses, agreements and laws. Additional representations attest that the company has not misused data or intellectual property in generative AI inputs or outputs. That means not using personal information, trade secrets or confidential information of any kind in prompts or inputs to generative AI tools, except where "such Generative AI Tools do not use such information, prompts or services to train the machine learning or algorithm of such tools or improve the services related to such tools." A fallback position requires companies to represent and warrant that any data used for model training has been deidentified.
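
By way of illustration only, a company preparing to make such a representation might screen content for obvious identifiers before it reaches a generative AI tool. The sketch below is a hypothetical, minimal redaction pass; the patterns and function name are invented for this example, and genuine deidentification requires far more rigor than pattern matching:

    import re

    # Hypothetical, minimal redaction pass: masks obvious identifiers
    # before text is sent to a generative AI tool. Illustrative only;
    # real deidentification goes well beyond regular expressions.
    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    print(redact("Contact Jane at jane.doe@example.com or 555-867-5309."))
    # -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].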

Generative AI outputs also present intellectual property and privacy risks. Consequently, the NVCA form requires companies to attest that they have not used generative AI tools to develop any material company-controlled IP that the company intended to maintain as proprietary, in a manner that it believes would materially affect the company's ownership or rights therein.

These are just the basic representations. In deals involving AI technologies, the reps expand to address additional specifics, such as model training, fine-tuning of model weights, retrieval-augmented generation, floating-point operations and the use of synthetic content.

AI addenda

Similar to data protection agreements, which have become a staple of business contracting, AI addenda are now appearing in master service agreements, terms of use and licensing deals, sometimes popping up long after the execution of such contracts. They are intended to address a hodgepodge of AI-related issues, ranging from bias and discrimination in algorithms or training datasets to the protection of intellectual property. An AI addendum may apply to a vendor's provision of any product or service that constitutes, incorporates or otherwise utilizes generative AI.

A vendor may be asked to commit:

- to comply with any laws relating to the development, training or integration of AI;
- that its AI systems are fit for purpose and materially free of defects;
- that it has implemented risk management frameworks in its design, development or deployment of an AI system;
- that it has taken all steps necessary to prevent bias or discrimination in training data or outputs;
- to ensure its AI system is overseen by humans who can intervene in, override or reverse a particular decision or output;
- to notify the customer of any issue affecting the reliability, safety or security of the AI system;
- to provide the customer with adequate training, instructions, documentation and assistance to enable it to properly use the AI systems; and
- to maintain and retain logs of events in the operation of the AI system (a hypothetical sketch of such logging appears after this list).
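
What that last commitment might look like in practice is sketched below: each interaction with the AI system is recorded as a structured, timestamped entry. The field and event names are invented for illustration and are not drawn from any standard or vendor documentation:

    import json
    import logging
    from datetime import datetime, timezone

    # Hypothetical sketch of the event logging a vendor might commit to:
    # each AI system interaction is written as a structured, timestamped
    # record. All field names and values are illustrative.
    logging.basicConfig(filename="ai_events.log", level=logging.INFO,
                        format="%(message)s")
    logger = logging.getLogger("ai_system_events")

    def log_ai_event(event_type: str, model: str, detail: str) -> None:
        logger.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event_type,  # e.g., "inference", "override", "error"
            "model": model,
            "detail": detail,
        }))

    log_ai_event("inference", "contract-review-v2", "clause summary generated")
    log_ai_event("override", "contract-review-v2", "human reviewer rejected output")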

Other common requests include provisions on intellectual property, including commitments by the vendor with respect to both inputs and outputs, as well as protection of customer data, the content of prompts and generated materials. Vendors, however, may negotiate for rights to use, prepare derivative works based on, and publish or distribute the prompts, or to retain rights to use the outputs as training data.

In AI addenda too, model training plays a central role. Vendors often commit not to use customer data, prompts or outputs to customize, train or otherwise improve their AI models without prior consent. To enhance transparency, a vendor may agree to share with its customers a model card, that is, a document accompanying an AI system that evaluates the system under a variety of conditions, discloses the contexts in which it is intended to be used, and details the model's performance and the relevant evaluation procedures.
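
By way of illustration, the kinds of fields a model card gathers might look like the following sketch; all names and values are invented for this example rather than taken from any actual model card:

    # Hypothetical sketch of the fields a model card might contain.
    # All names and values are invented for illustration.
    model_card = {
        "model_name": "contract-review-v2",
        "intended_use": "First-pass summarization of commercial contract clauses",
        "out_of_scope_uses": ["legal advice", "consequential decisions without human review"],
        "training_data": "Deidentified commercial contracts; no customer data",
        "evaluation": {
            "conditions": ["English-language contracts", "documents under 50 pages"],
            "metrics": {"clause_extraction_accuracy": 0.94},
        },
        "human_oversight": "Outputs reviewed by counsel before use",
    }

    for field, value in model_card.items():
        print(f"{field}: {value}")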

At the same time, customers are asked to commit to: not providing an AI system with inputs or prompts comprising sensitive personal information (or protected health information under the U.S. Health Insurance Portability and Accountability Act); and not using generative AI to mislead any person into believing that output was solely human-generated, to generate spam or fraudulent or inappropriate content for dissemination, or to violate any technical documentation, use guidelines or parameters.

In addition, vendors often disclaim any warranty that generative AI outputs are accurate or reliable, and may warn deployers against relying on outputs without independent verification or using them for consequential decisions.

Conclusion

As with any emerging technology, generative AI presents risks that businesses allocate through contracts. While the dust has not yet settled in the AI contracting arena, trends, forms and best practices have emerged that require parties to an AI transaction to deploy specialized contracting expertise.

Steve Charkoudian is chair of Goodwin's Data, Privacy and Cybersecurity Group.
Omer Tene is a partner at Goodwin.

Editor's note: This is the second article in a three-part series exploring contracting for artificial intelligence. Part one discussed what it means to be an AI lawyer; part three will focus on AI governance.