Editor's note: In this first installment of a three-part series, the authors explore what it means to be an AI lawyer. Part two of the series will focus on contractual responses and part three on privacy solutions.
Ever since OpenAI launched ChatGPT two years ago, the global economy has been awash with alternating waves of excitement and apprehension about the new technology. Recognizing the plethora of legal issues that artificial intelligence raises, companies have scrambled to put in place policies governing AI adoption in their organizations, as well as contracts and procedures for integrating AI features up and down the tech supply chain.
Regardless of whether AI is transformative like the internet, dangerous like nuclear power, an economic boom as reflected in Nvidia's USD 3 trillion valuation, or a bust as skeptics continue to warn, lawyers have been responding to AI opportunities and risks and devising novel contractual and governance mechanisms in real time.
AI law is not just privacy law or intellectual property law. It is both of those fields, plus multidisciplinary expertise that cuts across every practice area at major law firms. In an April 2023 policy statement, several U.S. regulators, including the Department of Justice, Federal Trade Commission, Consumer Financial Protection Bureau and Equal Employment Opportunity Commission, emphasized that while the U.S. does not have a specific AI law in the mold of the EU AI Act, "there is no AI exemption to the laws on the books." In other words, AI is governed by existing laws on privacy and IP; fair employment, credit, housing and insurance; product safety and torts; and the list goes on. Every law is AI law.
Corporate law
For starters, AI lawyers advise on transactions and acquisitions at the cutting edge of the AI and generative AI space. Deals in this space require craftsmanship and innovation in a way that few, if any, traditional deals do. They center on knowledge and data. Lawyers who have grown accustomed to acquisitions in which buyers acquire software or code now need to frame transactions in which the central asset is an AI model. This has profound implications for deal structure, representations and warranties, and covenants and indemnities.
IP and privacy
AI is synonymous with massive amounts of data. Without it, there would be no "I" in AI. AI companies gather data from myriad sources, including scraping it from publicly available websites, licensing it under commercial terms, obtaining it from open-source databases and collecting it from their own customers. The data may be confidential or customer information, protected or unprotected by IP rights, or personal data. That's where IP and privacy lawyers come in. In corporate transactions and licensing deals, IP and privacy lawyers are charged with:
Defining terms. AI lawyers must understand more than just basic AI terms like "large language models" and "reinforcement learning from human feedback." They negotiate over concepts like "retrieval-augmented generation," which enhances the accuracy and reliability of an LLM with a customer's siloed data; "model weights," levers that help calibrate the performance of the model to produce behavior that fits the developer's goals; "floating point operations," a metric that quantifies the computational capabilities of an AI system; and "synthetic content," information that has been altered or generated by AI, including data used to train AI models.
Representations and warranties. Unlike more established fields of law, AI does not yet have canned representations and warranties. Rather, AI lawyers are drafting that canon right now. In corporate transactions, companies are asked to verify how they collect and use data for training AI models, whether their AI technologies respect IP rights, how they deploy AI and generative AI in their operations, what governance mechanisms they have in place to mitigate AI risks, how they secure AI from insider and outside threats, whether they have policies to identify and mitigate bias in training data or models, and more.
Licensing deals. To really understand AI, you need to read — and draft — the licensing agreements that govern this new technology. Upstream and downstream licenses along the AI and generative AI value chain are where lawyers apportion data and IP rights, impose privacy and cybersecurity obligations, and allocate risk. To draft these contracts, lawyers must know the ins and outs of foundation models, including the key differences between public and enterprise versions of AI tools.
Vendor management. Closely related to licensing deals, supply chain management is a hallmark of AI. Many companies are not just developers or deployers of AI. They wear both hats. That means lawyers need to manage the risks of AI integration not just by their clients but also by their clients' vendors and those vendors' vendors.
Policies. Like it or not, employees are going to use generative AI for everyday tasks. Prohibiting them from doing so is tantamount to prohibiting them from using Google. Since the beginning of last year, companies have been setting forth nuanced policies to recognize and mitigate AI risks such as inaccuracy, bias and discrimination, data leakage, cybersecurity, IP in inputs and outputs, privacy, and more. A good policy accepts the reality of widespread AI, offering employees guardrails and best practices to mitigate risks to safety, cybersecurity, IP and privacy.
Cybersecurity
Cybersecurity is existential for AI, as it is for other tech architectures. AI expands the attack surface and introduces new cyber risks such as data poisoning and prompt or bias injection. The National Institute of Standards and Technology AI Risk Management Framework, with its crosswalks to the NIST Cybersecurity Framework, provides key governance tools.
Regulation
While it's true that all law is AI law, policymakers in Washington, D.C., and state capitals are working to enact AI-specific legislation. AI lawyers must stay abreast of the latest developments. The trajectory is similar to privacy 20 years ago. While Europe opted for omnibus privacy legislation with the EU Data Protection Directive in 1995 and the General Data Protection Regulation in 2018, the U.S. enacted an alphabet soup of sector-specific privacy laws augmented by hundreds of state security and privacy statutes. For AI too, the EU quickly passed dedicated legislation with the EU AI Act. In the U.S., President Joe Biden set forth an AI Executive Order, as states, beginning with Colorado's AI Act, adopted laws focused on automated decision-making, AI in consequential decisions, algorithmic discrimination, frontier and foundation models, generative AI, and more. These laws will require companies to comply with transparency and disclosure obligations, conduct risk or impact assessments, and adopt AI governance programs. They will also institute new individual rights, such as rights to an explanation, to contest adverse decisions and to opt out of AI decision-making outright.
Other legal fields
This covers just the tip of the iceberg of AI law. AI requires expertise in labor and employment law. Indeed, one of the first-ever laws on AI is New York City's Local Law 144 of 2021, which governs automated employment decision tools. The EEOC issued a 2023 guidance document on "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures." Trade regulation is equally important. Consider that China is an AI superpower, that the battle for control of the chipmaking market has strong geopolitical overtones and that EU AI lawmakers have made efforts to accommodate local market players. AI impacts financial regulation. The CFPB has been extremely active in this space, issuing guidance requiring creditors to provide consumers with accurate and individualized explanations for AI-generated adverse actions and rules to ensure accuracy and accountability in the use of AI in home appraisals. And AI has become central to health care regulation, as many AI innovations focus on life sciences, clinical and genetic research, and health care solutions.
Conclusion
Since November 2022, tech lawyers have become AI lawyers, as AI permeates every business transaction, licensing deal and governance process. While IP and privacy play a major role in AI, given its data-intensive architecture, other legal disciplines also contribute to this rapidly growing body of law and regulation.
Steve Charkoudian is chair of Goodwin's Data, Privacy and Cybersecurity Group. Omer Tene is a partner at Goodwin.