The Council of Europe's Framework Convention on Artificial Intelligence is the first legally binding international treaty on artificial intelligence, and it sets forth a host of requirements for what future AI legislation in signatory jurisdictions, such as the EU, U.S. and U.K., among others, will need to look like. For those in the EU and those close to the EU AI Act, the convention's requirements will not come as a surprise or an additional burden, as the AI Act goes above and beyond them. The convention nevertheless holds many clues about what a private enterprise's future regulatory burden might be, even for enterprises outside the EU's jurisdiction.

The CoE's group of 46 members overlaps with the EU's membership and extends to many other states that share cultural, political or geographical proximity, such as the U.K., Andorra and Ukraine. The council should not be confused with the European Council or the Council of the European Union, which are institutions of the EU. The CoE regularly works on multilateral agreements, often with an eye toward extending and enforcing fundamental human rights.

While members of the council negotiate its conventions, several nonmember and observer states, such as Argentina and the U.S., are regularly involved in the negotiations and can bind themselves to the resulting treaty. Past conventions negotiated by the council include the European Convention on Human Rights and Convention 108 on the processing of personal data. These multilateral agreements have had lasting effects on the party states; the European Court of Human Rights has passed more than 16,000 binding judgments since its establishment in 1959.

The process for negotiating the Framework Convention was similar to that of previous treaties from the council. Negotiations started in 2019 with the creation of the Ad Hoc Committee on Artificial Intelligence, which was later replaced by the Committee on Artificial Intelligence. The treaty was drafted through the committee with contributions from all members, the EU, and nonmember and observer states. After the treaty was finalized in May 2024, it opened for signature in September 2024.

Multilateral AI governance agreements

Although the convention bills itself as the world's "first-ever international legally binding treaty" in the field of AI governance, it is not the first multilateral agreement in the field. The Organisation for Economic Co-operation and Development's Principles for Trustworthy AI, which largely overlap with the convention's, were adopted in 2019 and revised in May 2024. The Bletchley Declaration also precedes the convention, having been adopted in November 2023 by a group of nations very similar in composition to the convention's signatories. The declaration, in contrast to the convention, represented voluntary commitments by its adherents to cooperate on AI safety.

A May 2024 follow-up meeting in Seoul produced the Seoul Declaration, which furthered the goal of cooperation on AI safety through commitments to create AI safety institutes and to cooperate on AI safety research. Many of the adherents have already done so; notably, the U.S. and EU AI safety institutes are cooperating.

The Hiroshima AI Process and its agreements among the G7 members also precede the convention, although, like the OECD and Bletchley/Seoul agreements, they are not legally binding. Two documents came out of the Hiroshima AI Process meeting in October 2023: a set of principles for the use of AI, largely based on the OECD principles, and a code of conduct, which asks organizations to voluntarily adhere to various suggestions around risk assessments and mitigations, adopt enhanced transparency around AI use and any AI incidents, and implement a variety of governance, privacy and security measures. Although these agreements are not legally binding, the evolution of discussions among the adherents, whose composition largely overlapped from one agreement to the next, shows that the understanding of AI risks and how to address them has remained stable over the past five years.

Committing to adopting rules for AI governance

Signatories to the Framework Convention include Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the U.K., Israel, the U.S. and the EU. Other countries, such as Canada, took part in the negotiations but did not sign the treaty in the first round of signatures. By signing the treaty, the parties committed to adopting legislation that conforms to its minimum requirements.

The relevant national legislatures of the parties are expected to adopt conforming legislation in the coming years. Unlike the European Convention on Human Rights, however, the Framework Convention's enforcement mechanisms are relatively weak: to enforce the convention, the signatories will reconvene and assess whether they are compliant. Although the EU and its member states likely already fulfill their obligations under the treaty, no comprehensive legislation introduced in either the U.S. or the U.K. yet fulfills these requirements.

The treaty lays out a handful of principles that should exist in each party state's domestic legislation concerning the development, use and governance of AI by public and private entities. These are human dignity and individual autonomy, equality and nondiscrimination, respect for privacy and personal data protection, transparency and oversight, accountability and responsibility, reliability, and safe innovation.

Most of these principles should be familiar to those following the various national AI policies released in the preceding years, as well as the OECD principles, whose adherents largely overlap with the signatories of the convention. The definition of AI in the convention also echoes the definition put forth by the U.S. National Institute of Standards and Technology, a version of which has also been used by the OECD, the AI Act and the Colorado Consumer Protections for AI, further evidence that this relatively broad definition is becoming the international standard.

One principle that might stand out, however, is "safe innovation," which in the CoE's documentation means states should create room for innovation by allowing the controlled development and testing of AI systems through the targeted, temporary suspension of regulations. This is essentially what Article 57 of the AI Act allows with regulatory sandboxes. Another example can be found in Utah's AI Policy Act, which establishes the AI Learning Laboratory Program. The regulatory sandbox and the learning laboratory share the same goals: to allow AI developers to test the boundaries of regulation, and to allow regulators and lawmakers to learn and adjust regulations accordingly.

There is a carve-out for national security in the convention, which appears to reflect a consensus among legislators and regulators, who do not believe they should restrict the use of AI in warfare or in areas like homeland security. While the use of AI in autonomous weapons is being discussed in other multilateral fora, such as the Group of Governmental Experts on Lethal Autonomous Weapons Systems, few of the more robust AI governance and policy frameworks specifically regulate or restrict AI's use in national security applications.

Specific governance requirements

The Framework Convention lays out several governance requirements whose compliance burdens fall on developers and deployers of AI systems. These do not differ greatly from those of the AI Act. The convention does not draw a distinction between high-risk and minimal-risk AI; instead, its language conveys that risk should be considered when placing restrictions or obligations on AI systems.

AI systems should have documentation available to those using the system, and that documentation should be thorough enough that users can challenge the AI system or its decisions. Parties must make it possible for someone impacted by an AI system to seek redress through a government authority, which implies such an authority must exist to address complaints.

Developers and deployers need to notify those interacting with an AI system that they are indeed interacting with an AI system. They also need to conduct risk and impact assessments, possibly test the system before release, and establish prevention and mitigation measures based on those assessments. Parties to the Framework Convention also have the option to introduce wider protections, and even to ban certain AI systems outright, as the AI Act has done.

Organizations preparing to comply with current comprehensive AI legislation, such as the AI Act or the Colorado Consumer Protections for AI, should take note of this treaty and ensure their policies and governance efforts anticipate compliance with any legislation that follows from ratification of the convention.

This includes maintaining robust documentation that is public or ready to be made public, conducting risk and impact assessments for any AI system that could affect fundamental human rights or access to public services, implementing prevention and mitigation plans based on those assessments, and ensuring transparency in the use of public-facing AI systems, including notices to those interacting with an AI system. Many of these measures are already required by comprehensive AI legislation, so the additional burden should not be large.

The CoE's Framework Convention represents the culmination of years of negotiations, but also another iteration of multilateral agreements among familiar parties, namely the U.S., U.K. and EU. While the treaty may read as a continuation of previous agreements, such as the OECD's Principles for Trustworthy AI, the Bletchley and Seoul Declarations and the Hiroshima AI Process, it is the first legally binding treaty of its kind. Signatories have committed themselves to adopting legislation that conforms to the agreement, although they remain free to restrict the use of AI further domestically, as the EU has done.

Richard Sentinella is the AI governance research fellow at the IAPP.