The EU Artificial Intelligence Act was carefully crafted with a risk-based approach to regulating AI: heavily regulating high-risk AI models, more lightly regulating generative AI models that pose systemic risk, and leaving low-risk models alone.

Not all purchasers of AI functionality have adopted this risk-based approach, however.

Driven by legitimate concerns over ethics, bias, data privacy compliance, automated decision-making, and intellectual property ownership or infringement, many businesses are beginning to dive deep into the AI solutions their vendors are offering.

Where such diligence efforts unearth high or systemic risk, procurers of these solutions should bolster their contracts with transparency and reporting requirements, audit rights and performance warranties. However, if the AI systems offered present little to no risk, it is unreasonable to demand that vendors comply with similar contractual terms.

In the months leading up to the final draft of the EU AI Act, one of the most significant points of contention among EU member states was the extent to which the proposed act would hinder innovation in the AI space. While in favor of regulation for high-risk models, France, Germany and Italy opposed what they viewed as overregulation of generative AI models, which they felt could give innovators outside the EU a leg up over the EU tech sector and generally chill innovation globally.

A tiered, risk-based approach

After intense negotiations, member states compromised with a tiered approach. In the final version of the act, some categories of AI systems are deemed too risky and are entirely prohibited — for instance, social credit scoring systems, emotion-recognition systems at work and in education, and AI used to exploit individual vulnerabilities.

Next are heavily regulated high-risk systems, which entail significant data quality, documentation and traceability, human oversight, accuracy, cybersecurity and robustness, conformity assessments, and government registration obligations.

Generative AI systems with "systemic risk," or high-risk potential, as determined by specific technical tools and methodologies, including the cumulative amount of compute used for its training, have slightly fewer, but still material, compliance obligations.

Finally, providers of standard generative AI systems that interact with individuals need only ensure that users are aware they are interacting with an AI system.

Providers or deployers of AI systems not falling into any of the above categories may voluntarily commit to codes of conduct developed by the industry, including those related to AI governance.

This tiered approach subjects the riskiest systems to intense scrutiny while leaving the vast majority of AI applications relatively unencumbered. Legislation under consideration in various U.S. states and the U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence issued in October 2023 lean toward a similar risk-based approach, taking care not to overregulate or burden low-risk models and to encourage innovation.
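For context on the compute criterion mentioned above: the act presumes a general-purpose model carries systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations. The sketch below is a rough illustration of that comparison, not a legal test; it uses the common 6 × parameters × tokens approximation for training compute, which is a heuristic assumption rather than a methodology prescribed by the act, and the model figures are hypothetical.

```python
# Rough, illustrative estimate of cumulative training compute versus the
# EU AI Act's 10^25 FLOP presumption threshold for systemic risk.
# The 6 * parameters * tokens rule of thumb is a common heuristic, not a
# methodology prescribed by the act; the figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # compute-based presumption threshold


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute using the widely cited 6*N*D heuristic."""
    return 6 * parameters * training_tokens


if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
    flops = estimated_training_flops(70e9, 2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Presumed systemic risk under the compute criterion:",
          flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```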

Trust, transparency and responsible deployment

Just as regulators globally are careful not to create undue burden on providers and deployers of low-risk systems, procurers of AI systems should be too. As my organization's AI compliance lead, I have reviewed many vendor questionnaires and contractual requirements in recent months that would place the same rigorous compliance requirements on relatively low-risk models as those the AI Act imposes on high-risk models.

In our case, we either develop proprietary purpose-specific models for discrete tasks, such as suggesting assets or datasets to review or identifying similar datasets, or we leverage third-party generative AI models, such as Google Vertex AI, to perform simple tasks like creating SQL queries from plain-language prompts. These use cases involve significant user review and control; they neither contain automated decision-making functionality nor produce outputs that have any material impact on individuals or business outcomes.
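To make the second category concrete, the snippet below sketches how a plain-language request might be turned into a suggested SQL query with a hosted generative model. It assumes the vertexai Python SDK's GenerativeModel interface; the project ID, model name, table schema and prompt wording are hypothetical placeholders, not our production implementation.

```python
# Minimal sketch: turning a plain-language request into a suggested SQL query
# with a hosted generative model. Assumes the vertexai Python SDK; the project
# ID, model name, schema and prompt are hypothetical placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="example-project", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

schema = "orders(order_id, customer_id, order_date, total_amount)"
request = "Total revenue per customer for the last 30 days, highest first."

prompt = (
    "You are a SQL assistant. Given this table schema:\n"
    f"{schema}\n"
    f"Write a single SQL query for: {request}\n"
    "Return only the SQL."
)

response = model.generate_content(prompt)
candidate_sql = response.text

# The generated SQL is only a suggestion; a person reviews and edits it before
# anything is executed, so there is no automated decision-making.
print(candidate_sql)
```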

Given the low-risk nature of these models, we would not be able to meet some of the technical documentation, conformity assessment and reporting requirements imposed on high-risk models. While procuring organizations may want a one-size-fits-all approach to AI system standards across their suppliers, it is not appropriate to ask vendors to meet standards tied to regulatory requirements that do not apply to them or to their products or services. Suppliers of low-risk AI technologies will not be able to meet those demands, and such requirements would inevitably impede the commercialization of these technologies.

Using a contractual sledgehammer to crack a simple nut may be a reaction to a lack of clarity about the specific nature of the purchased AI systems. Providing a certain amount of transparency in our AI deployments can reduce friction between vendors and procurers.

First, we tell customers and prospects that any AI functionality embedded in our platform is subject to the same confidentiality, security and data privacy compliance obligations as the rest of the platform, and that the processing of personal data, if any, within our AI systems falls squarely under the scope of our data processing agreements. This simple statement immediately eliminates some of our customers' most fundamental concerns with AI systems.

Second, we provide significant information on the responsible deployment of AI, including details on our specific AI features through AI Fact Sheets on our Trust Center. Each fact sheet discloses the intended purpose of the AI feature, the nature of the training data leveraged and whether we use input data to retrain the system, the input data, the output data, the level of human oversight, whether personal data is processed in the system, and data storage and retention protocols. Where we deploy third-party generative AI models, we provide as much information as we can obtain from the model providers.
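To show how those disclosure fields fit together, here is a minimal sketch of a fact sheet represented as a structured record. The field names and example values are hypothetical illustrations, not the actual schema of our AI Fact Sheets.

```python
# Hypothetical structure for an AI feature fact sheet, mirroring the
# disclosure fields described above. Field names and example values are
# illustrative, not the actual schema used on our Trust Center.
from dataclasses import dataclass


@dataclass
class AIFactSheet:
    feature_name: str
    intended_purpose: str
    training_data: str            # nature of the training data leveraged
    retrained_on_input: bool      # whether input data is used to retrain the system
    input_data: str
    output_data: str
    human_oversight: str          # level of human review and control
    processes_personal_data: bool
    data_storage_and_retention: str


example = AIFactSheet(
    feature_name="SQL query suggestion",
    intended_purpose="Draft SQL queries from plain-language prompts",
    training_data="Third-party foundation model; no customer data used for training",
    retrained_on_input=False,
    input_data="User prompt and selected table schema",
    output_data="Suggested SQL query text",
    human_oversight="User reviews and edits every suggestion before use",
    processes_personal_data=False,
    data_storage_and_retention="Prompts retained per platform retention policy",
)
```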

Because our AI/machine learning team works carefully to ensure the quality of training data leveraged and the accuracy of the outputs, and because we maintain a strong internal AI governance process, we are happy to commit to maintaining our own internal procedures to assess in good faith new AI functionality in our offerings from the standpoint of performance, compliance, legality, security and ethics.

These transparency measures and commitments alleviate customer concerns, result in smoother negotiations and avoid unnecessarily burdensome contractual commitments that could detract from our ability to rapidly produce new low-risk AI solutions.

A delicate balance

The EU AI Act's tiered system, tailored to the risk levels of different AI systems, reflects a delicate balance between innovation and oversight.

Procurers should take note and avoid imposing high-risk compliance standards on low-risk models, which can stifle innovation.

Vendors of low-risk models that embrace transparency initiatives can encourage procurers to align their contracting processes with the more nuanced regulatory environment.