On 12 July 2024, the Official Journal of the European Union published the EU Artificial Intelligence Act, which aims to comprehensively regulate the design, deployment and use of AI systems in the EU.
Key provisions of the act will become applicable to organizations over the course of the next three years, depending on their roles and the risks and capabilities of their services, with the first provisions and restrictions taking effect 2 Feb. 2025.
In parallel, regulatory authorities in China have also turned their attention to AI, issuing regulations and technical standards to bring service providers into compliance. The three most important are the Regulations on the Management of Algorithm Recommendations for Internet Information Services, the Regulations on the Management of Deep Synthesis of Internet Information Services and the Interim Measures for the Management of Generative Artificial Intelligence Services, which have been fully applicable since their adoption in 2022 and 2023.
The evolution of AI governance frameworks in the EU and China shares many similarities. For example, the European Commission's High-Level Expert Group on AI published its Ethics Guidelines for Trustworthy AI in 2019, around the same time Chinese government bodies first made commitments to supervise AI.
Policymakers in the EU and China have likewise expressed similar regulatory goals, implicating topics such as ethics, data protection, safety and security. Additionally, AI regulation in Brussels and Beijing has been built on preexisting legal frameworks, including those regulating the processing of personal data.
Despite these similarities, however, there are notable differences between the two approaches. Exploring those differences can help organizations comprehend the scope of requirements in both markets and prepare for compliance.
EU opts for comprehensive law, China chooses targeted regulations — for now
The EU AI Act aims to lay down a "uniform legal framework … for the development, the placing on the market, the putting into service and the use of artificial intelligence systems" across the EU. Complementing other data-related laws — including those emanating from the Commission's digital strategy, such as the Data Act and the Digital Services Act — it will operate as a horizontal regulatory framework that brings into scope a variety of actors, organizations and technologies.
Encapsulating this is a series of core definitions, including one for AI systems as machine-based systems "designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment" and that infer "how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" based on the input they receive, and one for general-purpose AI, which captures foundation models that can be developed and integrated into other services.
In practice, this means compliance with the EU AI Act will involve determining whether organizations fall within the law's scope, what obligations are triggered and what needs to be implemented as a result. This indicates an intent to promote maximum harmonization. However, organizations will still need to navigate tricky issues that overlap with other laws, including those that govern personal data processing generally, and other topics like product liability and safety.
By contrast, China has opted for a very different approach. Rather than adopting a single comprehensive law, government bodies have followed a two-pronged strategy: one, draft and implement industry-specific regulations and, two, promulgate technical standards and AI governance pilot projects to build best practice and enforcement experience.
While there is substantial overlap in the approach taken by the existing regulations, each turns on the type of service offered. There is correspondingly no overarching definition of AI, though the term has been defined in certain technical standards.
Moreover, local authorities, such as those in Shanghai and Shenzhen, have issued their own experimental regulations to test different regulatory approaches, although these remain relatively light-touch in terms of prescriptive obligations on companies. None of these have so far been adopted at the central level.
Consequently, a key threshold question becomes whether a particular service falls within the scope of the regulation, not whether it meets the definition of an AI system. There have been initial attempts to develop a unified AI law in China, which may change this analysis.
Risk classification varies between technological and social contexts
AI governance in both the EU and China revolves around assessing levels of risk — albeit defined differently — and mitigating these through legal restrictions. However, the types of risk categories and the process of classification are notably different and carry implications for where organizations should start with compliance.
A key feature of the AI Act is how it assigns differing levels of compliance requirements depending on the risk category of the service. This classification system operates on a tiered basis, with the strictest restrictions corresponding to the highest level of risk. From a compliance perspective, a significant hurdle will be determining which risk category an AI system falls into and whether an exception may apply.
While the classification framework appears sector and technology agnostic, certain technologies, such as biometrics, emotion recognition and facial recognition, are targeted more proactively than others. In practice, organizations will need time and guidance to unravel the interaction between these different risk levels.
Under the EU AI Act, risk categories are classified as prohibited practices, high-risk AI systems and AI with specific transparency obligations.
Article 5 sets forth eight different types of prohibited AI systems, which all share a common feature: prohibited systems substantially undermine EU values by empowering organizations to manipulate or gain predictive control over human social and psychological behavior in ways deemed harmful and unacceptable to individuals' interests. Notably, there are exceptions and conditions that will prevent AI systems from falling into this category. Member states, likewise, have flexibility in authorizing some uses of prohibited AI systems, such as real-time remote biometric identification systems, but do not have the ability to issue licenses for all technologies listed.
AI systems falling into the high-risk category are permitted but are subject to strict compliance obligations. This category intends to capture technologies that pose significant risk of harm to the health, safety or fundamental rights of an individual but are not so harmful as to be outright prohibited. Under Article 6, AI systems become high risk if they meet the conditions for an Annex I high-risk system or if they are expressly referred to in Annex III. The Commission will clarify the scope of high-risk AI systems through delegated acts and guidelines, which will include a list of practical examples.
Article 50 captures certain AI systems where it is not immediately clear to the user that they are communicating with an AI system, like chatbots, or images, audio, video or other content that have been manipulated by an AI system, like synthetic content or deepfakes. AI systems that generate or manipulate text that is published to inform the public on matters of public interest are also included. Note, this category is not mutually exclusive with high-risk systems.
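For readers who prefer to see the tiered logic end to end, the following Python sketch compresses it into a toy triage routine. It is purely illustrative: the attribute names are hypothetical placeholders standing in for the legal tests of Articles 5, 6 and 50, and real classification turns on legal analysis, not boolean flags.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative flags only; actual classification requires legal analysis."""
    uses_prohibited_practice: bool = False     # stands in for Article 5 practices
    meets_annex_conditions: bool = False       # stands in for Article 6 / Annexes I and III
    exception_applies: bool = False            # stands in for Article 6 carve-outs
    interacts_with_users: bool = False         # e.g., chatbots (Article 50)
    generates_synthetic_content: bool = False  # e.g., deepfakes (Article 50)

def triage(system: AISystem) -> list[str]:
    """Return the non-exclusive risk labels this simplified triage assigns."""
    labels = []
    if system.uses_prohibited_practice:
        labels.append("prohibited practice (Article 5)")
    if system.meets_annex_conditions and not system.exception_applies:
        labels.append("high risk (Article 6)")
    if system.interacts_with_users or system.generates_synthetic_content:
        labels.append("specific transparency obligations (Article 50)")
    return labels or ["minimal risk: no specific obligations triggered"]

# A chatbot that also meets an Annex III condition carries both labels,
# reflecting that Article 50 is not mutually exclusive with high risk.
print(triage(AISystem(meets_annex_conditions=True, interacts_with_users=True)))
```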
In contrast, Chinese AI measures build similar compliance requirements off levels of risk but do so in different ways. The AI regulations in force with the strictest obligations target specific services and, within those services, a special category of capabilities carries more restrictions.
There is no overarching framework for AI risk classification that imposes specific legal obligations. When Chinese policy instruments have categorized AI risks generally, they've come in the form of best practices, guidelines and administrative strategy documents, such as the Ministry of Science and Technology's code of practice for AI ethics.
Under the Chinese AI regulations, risk categories are classified as prohibited practices, public opinion attributes and social mobilization capabilities.
China does not have anything approximating a prohibited list of technologies, though in practice some technologies may be effectively off-limits due to de facto regulatory expectations. Experts have floated, and continue to float, the idea of creating a negative licensing regime, which would create a list of prohibited AI systems requiring preapproval from authorities. But policymakers have not implemented this in formal regulation. If such an idea were picked up, it is likely Chinese authorities would be able to approve AI systems after an assessment or security filing.
While the Chinese AI regulations in force do not prescribe high-risk categories, they do recognize a special subset of services, those that have "public opinion attributes or social mobilization capabilities," and impose special restrictions on these. Public opinion attributes cover technologies that can provide a platform or channel for the expression of public opinion, while social mobilization capabilities target services that can encourage the public to engage in specific actions.
The concept originates from a 2018 regulation governing news information services, which also provides examples such as forums, blogs, chat rooms, short videos, live webcasts, information sharing services and apps. In the context of AI, these features often overlap and would include technologies like chatbots, virtual assistants and services that use content-recommender algorithms. News information services are subject to a strict licensing regime in China.
The extension of the concept to digital technologies indicates that despite sharing some similarities with the EU AI Act — in terms of a special focus on technologies that manipulate or distort information — administrative context changes how and why risk categorization plays out in practice.
Covered actors reveal differences in how the AI supply chain is conceptualized
Another key difference between Chinese regulations and the EU AI Act concerns who is governed and when. The EU AI Act defines distinct actors, each with corresponding obligations. These actors play a specific role in the AI supply chain — how an AI product is designed and placed on the market — and can occupy spaces in numerous industries, as they are sector neutral.
While some of this terminology exists in Chinese regulations, there is no one single document describing all actors. Rather, providers of specific services are covered. These services can involve multiple actors, but in practice they are interpreted to cover very specific entities.
Under the EU AI Act, the covered actors and their key obligations fall into the categories of providers, deployers, and importers and distributors of AI systems.
Providers are entities that develop AI systems or general-purpose AI models, or have them developed, and subsequently place them on the market or put them into service under their own name or trademark. Depending on their risk classifications, most obligations of the EU AI Act fall on providers. For high-risk systems these include complying with technical and organizational measures, AI quality management, post-market monitoring, corrective actions, system registration and conformity assessments.
Deployers are entities that use an AI system under their authority, excluding personal, nonprofessional uses. These entities are subject to specific deployment, data quality and monitoring obligations. For high-risk systems, this includes ensuring the real-world application of the AI system adheres to the design and operational regulations of the provider, for example input data control, monitoring and incident reporting. Notably, deployers may assume the responsibilities of providers through certain actions.
Importers are entities that are located in the EU and place AI systems carrying the name or trademark of someone established outside the EU on the market. For high-risk systems, they must verify appropriate conformity assessments have been completed, check the provider's technical documentation, and ensure the necessary CE marking/declaration of conformity and instructions for use are in place.
Distributors, on the other hand, are entities that are neither providers nor importers but otherwise make AI systems available. For high-risk systems, they must verify critical elements before making the AI system available, including the CE marking, declaration of conformity and instructions for use.
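As a rough illustration, this role-based allocation of obligations can be modeled as a lookup table plus a role-shift rule. The mapping below paraphrases the obligations summarized above, not statutory text, and the single flag that turns a deployer into a provider is a deliberate simplification of the conditions in the act.

```python
# A simplified, illustrative mapping of EU AI Act supply-chain roles to the
# high-risk obligations summarized above; paraphrases, not statutory text.
HIGH_RISK_OBLIGATIONS: dict[str, list[str]] = {
    "provider": [
        "technical and organizational measures",
        "AI quality management",
        "post-market monitoring and corrective actions",
        "system registration and conformity assessment",
    ],
    "deployer": [
        "operate per the provider's instructions",
        "input data control",
        "monitoring and incident reporting",
    ],
    "importer": [
        "verify conformity assessment and technical documentation",
        "check CE marking, declaration of conformity and instructions for use",
    ],
    "distributor": [
        "verify CE marking, declaration of conformity and instructions for use",
    ],
}

def effective_role(role: str, markets_under_own_name: bool) -> str:
    """Deployers and others may assume provider duties in certain cases,
    sketched here as a single hypothetical flag."""
    return "provider" if markets_under_own_name else role

# A deployer that rebrands a high-risk system inherits provider obligations.
print(HIGH_RISK_OBLIGATIONS[effective_role("deployer", markets_under_own_name=True)])
```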
In contrast, Chinese AI regulations define main actors as providers of algorithm recommendation services, providers and technical supporters of deep synthesis services, and providers of generative AI services.
Providers of algorithm recommendation services, detailed in Articles 6-22, are entities or individuals that use recommendation algorithms to provide internet-based information services, including the selection and pushing of content. These providers must ensure content complies with laws, mark algorithm-generated content, establish internal management systems, protect users and avoid harmful influences. They must also protect vulnerable groups, as well as conduct security assessments and file algorithms with the Cyberspace Administration of China, when applicable.
Providers and technical supporters of deep synthesis services are covered in Articles 6-22. In this context, providers are entities or individuals that provide deep synthesis services. They must ensure compliance with content moderation, transparency and user protection requirements, as well as conduct security assessments and algorithm filings, when applicable. Technical supporters, on the other hand, are entities or individuals that provide technical support for deep synthesis services. They are responsible for protecting personal information, regularly reviewing and verifying synthetic algorithm mechanisms, cooperating with authorities on remediation, and conducting security assessments and algorithm filings.
Providers of generative AI services, covered in Articles 5-15, are entities or individuals that use generative AI technology to provide generative AI services, including through the provision of a programmable interface or other means. These providers must protect users, ensure content complies with laws, maintain transparency, use lawful datasets, prevent discrimination, protect privacy and conduct security assessments. They are also required to guide users, monitor AI use, mark AI content and report noncompliance to authorities, ensuring responsible and secure AI services.
Notably, the EU AI Act's approach to general-purpose AI models is unique. Article 3(63) defines a general-purpose AI model as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications." The inclusion of special rules covering these, including those covering transparency for downstream uses and training data, technical documentation and data security testing, within the structure of the AI Act itself was a major negotiating point during the new law's drafting.
In China, regulators have also focused on general-purpose AI models. However, the level of prescription is far less defined than in the EU. Regulations currently in force may implicate the development of such models, but the bulk of current oversight comes from nonbinding technical standards, which play a different role in enforcement and oversight than in the EU.
A good example concerns the standard TC260-003 Basic Security Requirements for Generative Artificial Intelligence Services, which imposes special requirements around what training data can be used, how it can be used, the level of human oversight expected, and metrics for evaluating and recording the accuracy and quality of output data. These obligations are not explicitly found in AI regulations, although many contain a requirement to comply with standards when applicable.
Both jurisdictions impose a unique scope of requirements
Major differences are also visible in the types of compliance obligations that apply to organizations. The bulk of the differences arise from the risk classification of the service and the organization's role in providing that service.
Compliance requirements under the EU AI Act are quite dense and will be subject to further clarification through implementing and delegated acts. For instance, high-risk AI systems must meet specific technical and organizational measures throughout the life cycle of the AI system, including requirements around risk management, such as implementing identification, analysis and mitigation strategies; data governance, such as meeting standards for data training and model testing; technical documentation and recordkeeping, such as demonstrating compliance, traceability and post-market monitoring; transparency, such as designing AI systems so outputs can be comprehended and providing instructions for use; accuracy, robustness and cybersecurity, such as maintaining accuracy, security and resilience; and human oversight, such as keeping humans in the loop depending on the AI system's level of risk, autonomy and context of use.
Notably, requirements in China also touch upon these topics and use similar regulatory language. For instance, a key component of Chinese AI governance is implementing comprehensive risk management systems, recordkeeping and providing instructions for downstream uses of AI products. Additionally, many requirements in Article 50 of the EU AI Act mirror those already found in Chinese AI regulations, such as transparency requirements to watermark AI-generated content, label content containing deepfakes and disclose to the public that services use AI.
Despite textual similarities, it remains to be seen exactly how much actual overlap arises in enforcement and implementation. However, there are a few key structural differences worth mentioning.
Conformity assessments. The integration of the EU AI Act's requirements into the conformity assessment procedure under relevant EU product safety laws is a unique feature not currently found in China. While conformity assessments are a routine part of testing and accreditation in China, their role in AI governance is relatively small compared to other forms of regulatory review, such as algorithm filing and security assessments. Note, conformity assessments have been floated as an idea in local regulations such as Shanghai's AI regulation.
Algorithm filing. In China, services that meet particular criteria must file their algorithms with authorities. This filing must include basic information about the filing entity and the algorithm, as well as the products or services associated with the algorithm. Regulators have created a dedicated online portal for this. While the EU AI Act contains registration requirements for certain AI systems and provides for an EU database, no equivalent central filing system is currently operational.
Security review. Another unique feature in China is the use of security reviews. For algorithms with public opinion attributes and social mobilization capabilities, providers must conduct security assessments. This includes multiple dimensions such as risk prevention, user protection, content moderation, model security and data security. Actors must complete a self-assessment before filing.
While regulators in the EU will conduct investigations and ask to see compliance documentation, there is nothing like a detailed security review required before launch of the service. To be sure, conformity assessments under the EU AI Act are ex ante, but their purpose is aligned with product safety and involves certification from third-party accreditation bodies, not outright licensure from a regulatory authority. Additionally, the EU AI Act's requirement to conduct a fundamental rights impact assessment under certain conditions diverges in scope and substance from China's self-assessment process, although in practice similar topics will be covered.
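To make the Chinese pre-launch sequencing concrete, the sketch below chains the two steps described above, a security self-assessment followed by an algorithm filing, for services judged to have public opinion attributes or social mobilization capabilities. The names are hypothetical, and whether a service crosses that threshold is a regulatory judgment rather than a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class ChineseAIService:
    """Illustrative flags; the real thresholds are regulatory judgments."""
    service_type: str  # e.g., "algorithm_recommendation", "deep_synthesis", "generative_ai"
    public_opinion_attributes: bool = False
    social_mobilization_capabilities: bool = False

def pre_launch_steps(service: ChineseAIService) -> list[str]:
    """Sketch of the sequencing described above: self-assess, then file."""
    steps = []
    if service.public_opinion_attributes or service.social_mobilization_capabilities:
        steps.append(
            "complete a security self-assessment covering risk prevention, "
            "user protection, content moderation, model security and data security"
        )
        steps.append("file the algorithm via the regulators' dedicated online portal")
    return steps

svc = ChineseAIService("generative_ai", public_opinion_attributes=True)
print(pre_launch_steps(svc))
```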
Enforcement structure indicates differences in administrative and legal contexts
Under the EU AI Act, enforcement of most provisions — excluding those targeting general-purpose AI — is the responsibility of national competent authorities, which each member state must designate under the law. The national competent authorities will supervise the implementation of the AI Act, including obligations for high-risk AI systems and prohibitions. It is currently unclear which administrative bodies will be designated as competent authorities, but many data protection regulators have already insisted the AI Act be within their competence.
Coordinating enforcement is the AI Office, which will play a supportive role by centralizing oversight across all 27 member states and offering guidance and expertise on key topics. The AI Office, which sits within the European Commission's Directorate-General for Communications Networks, Content and Technology, will assist with joint investigations, streamline communication between national authorities, and create a governance system that includes establishing technical advisory bodies at the EU level and supporting the scientific panel of independent experts.
The office also has the authority to enforce the AI Act's provisions on general-purpose AI, a key signal that enforcement of these models should be uniform across the EU. Finally, the AI Office will assist authorities and other central-level bodies by issuing voluntary codes of conduct and supporting the drafting of technical standards, guidelines, and implementing and delegated acts.
Similarly, in China, there is a plethora of different administrative bodies involved in AI enforcement. These cut across different ministries and departments with core responsibilities often divided depending on the area of law and industrial sector. These government bodies collaborate on key AI issues, which include drafting and adopting AI regulations and guidance.
For instance, while the CAC remains a main regulator in AI governance, the Ministry of Industry and Information Technology, which supervises the telecommunications, internet of things and mobile apps industries, played a key role in promulgating China's existing AI regulations. Additionally, authorities in China have built off previous expertise to address risks stemming from AI. A good example is how the CAC has taken a lead role in supervising the cybersecurity and data protection aspects of AI, two areas it traditionally oversees for organizations generally.
A key difference between the EU and China is the division of competence between and within administrative units. In practice, this will affect how organizations understand and evaluate their relationships with enforcement bodies. Under the AI Act, member states have responsibilities to designate enforcement authorities and can do so based on their own considerations. This means enforcement bodies may differ depending on the member state. Organizations will need to get used to different bodies being the competent regulator depending on where they offer services.
In China, ministries have local offices across provinces and city-level units, with these bodies being responsible for local oversight. Consequently, organizations will need to familiarize themselves with all ministries that have competence over their services, regardless of where they operate.
Relatedly, in China, centralized oversight happens through the central-level offices of each ministry or state body that has political power to supervise its respective industries. These offices dictate rules of procedural administration and policy that local offices must implement. This contrasts with the EU, where central-level bodies play a supporting role and can issue guidance but cannot outright interpret and enforce the AI Act on behalf of competent authorities, save for the specific provisions the AI Office oversees.
Conclusion
Understanding the key differences between the EU and China's approach to AI regulation can help organizations better prepare for compliance. Beyond these key differences in legislative approach, risk-classification frameworks, scope of covered actors, key compliance requirements and enforcement structure, additional concrete lessons for compliance will emerge as AI regulations are implemented in both jurisdictions.
Hunter Dorwart is an associate, Harry Qu, CIPP/E, is a data associate, and Tobias Bräutigam, CIPP/E, CIPM, FIP, and James Gong, CIPP/CN, CIPP/E, CIPP/US, CIPM, are partners at Bird & Bird.