Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

In late March, the Public Buyers Community of the EU published model contractual clauses for the public procurement of artificial intelligence, which are rapidly becoming a reliable source of best-practice standards for private-sector AI contracting. An initiative of the European Commission's Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs (DG GROW), the Public Buyers Community aims to advance economic growth, competitiveness and business development, particularly among small and medium-sized enterprises.

As companies implement third-party risk policies as a part of their AI governance programs, these MCC-AI may influence procurement functions and guide customers' purchasing processes. The clauses largely follow the requirements for high-risk AI systems as defined under the EU AI Act.

As AI capabilities evolve and become more impactful, it is critical to make conscious, risk-based decisions grounded in a set of best practices. The MCC-AI, published in light and high-risk versions along with helpful commentary, provide complete model contractual clauses at two risk tiers and serve as a library of clauses useful for addressing specific risks.

The MCC-AI can be beneficial to private sector companies in accomplishing several goals, including aligning standards for best practices in AI contracting, creating a blueprint for risk management, ensuring international compliance, demonstrating supply-chain due diligence and mitigating litigation risk. Private sector companies and public sector organizations are not legally required to use the MCC-AI.

A comprehensive approach should future-proof the procurement process to be able to adapt to new and changing AI system capabilities. Purchasers should think about how the MCC-AI specifically meet the needs of their AI use cases. Vendors should examine existing privacy, security, AI governance and compliance measures and/or standard terms to identify the measures they can and cannot honestly provide — as well as which provisions would be irrelevant to their services.

Similarities between the MCC-AI-High-Risk and MCC-AI-Light

The MCC-AI-High-Risk are intended for procuring AI systems defined as high-risk under the AI Act, those that "may pose a high risk to the health and safety or fundamental rights of persons." The MCC-AI-Light are for those not classified as high-risk, but still requiring "transparency obligation or requirements for explanation of individual decision-making by the public administration."

Both sets of clauses address:

  • A 12-step risk management framework, including testing, documentation and continuous monitoring throughout the agreement term
  • Requirements for data governance, including relevant, accurate datasets in consideration of the context
  • Requirements for documentation, including technical specs and instructions for use
  • The necessity to implement automatic logging and record-keeping of AI system operations, stored for appropriate periods consistent with data subjects' access rights
  • Clear and transparent information about AI system characteristics and limitations
  • Effective human oversight
  • Appropriate levels of accuracy, robustness, safety and cybersecurity
  • Basic compliance monitoring
  • "Immediate" corrective action requirements
  • Frameworks for deciding rights over categories of data sets
  • When and how data should be provided upon request
  • What to indemnify
  • The assessment of impacts to fundamental rights
  • Transparency regarding decisions

Differences between the MCC-AI-High-Risk and MCC-AI-Light

Only the MCC-AI-High-Risk contain provisions that require a quality management system, including a strategy for regulatory compliance and accountability; a conformity assessment prior to delivery of the AI system; and an AI register for public transparency and comprehensive audit details.

Certain provisions of the MCC-AI-Light are enhanced in the MCC-AI-High-Risk, including the option to continue risk management provisions after the contractual term, enhanced transparency requiring disclosure of detailed performance metrics, the option to specify documentation language, and enhanced supplier and third-party data provisions with a two-way indemnification structure.

How use case affects MCC-AI selection

The purpose of a contract is to allocate and clarify risks and costs between the parties, so understanding an AI system's use case is fundamental. The MCC-AI, therefore, cannot be a one-size-fits-all solution. In many cases, even the light version may be overkill.

For example, despite the intimidating breadth of the AI Act, most consumer-facing AI applications don't need MCC-AI at all because they would be low or minimal risk under the AI Act. These use cases include AI features of video games, spam filters, basic content recommendation algorithms and content generation tools — excluding deepfake creators — as well as broad categories of low-risk productivity and entertainment applications.

If a system is generating content for a marketing department, the risk relates to the effects of what the system will help create. If the content generation is not using personal data and the worst-case scenario is essentially poor campaign quality, then most of these clauses would either be irrelevant or redundant to provisions usually found in standard terms for an online service. Such a use case is minimal risk.

On the other hand, if the system gathers personal information and constructs behavioral profiles, serving content on a schedule personalized to the target, then using the MCC-AI to address specific risks makes sense. For example, both the MCC-AI-Light and MCC-AI-High-Risk are designed to mitigate privacy risks and comply with the EU General Data Protection Regulation. That said, a well-constructed data protection addendum — as already offered by many software-as-a-service providers — would likely meet the same requirements.

There could be a non-high-risk scenario that nevertheless poses serious AI risks to a business, such as a system intended to predict important business outcomes — for example, logistics, demand or inventory forecasts. In such a scenario, the key risks would center on the accuracy and usefulness of the AI system.

Therefore, much of the MCC-AI-Light — such as the cybersecurity and data governance, integrity and accuracy provisions — would be helpful from the buyer's perspective. Even some of the MCC-AI-High-Risk might be a good idea, such as the pre-delivery conformity assessment.

Other approaches

For vendors, these requirements don't necessarily mean the contract has to look like the MCC-AI. For example, cybersecurity requirements can be met with existing security terms, and agreeing to maintain ISO 9001 certification can demonstrate a sufficient quality management system.

Alex Wall, AIGP, CIPP/E, CIPP/US, CIPM, FIP, is principal attorney at Wall Law.