Global AI Governance Law and Policy: China


This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in China. The full series can be accessed here.


Published: December 2025



There is no doubt that China has become one of the world's most important countries in artificial intelligence, driven by rapid innovation, robust investment and a wide range of AI applications.

China first introduced intelligent computing into its National Medium and Long-Term Technology Development Plan in 2006, laying the groundwork for treating AI as a transformative technology. In 2015, the State Council of China released the Internet Plus national strategy, identifying AI as one of the country's strategic emerging industries. The strategy also set the goal of establishing China as a major hub for AI innovation by 2030. Following that national strategy, a comprehensive AI ecosystem has emerged, and major Chinese technology and internet companies have rapidly launched AI products and services across diverse fields.

On 27 Aug. 2025, the State Council of China issued the AI Plus Action Plan, which is widely regarded as the blueprint for the country's national AI strategy in the coming years. According to this plan, China will prioritize the use and deployment of AI in six areas: science and technology development, industrial utilisation, consumer services, public welfare, governance and security, and international collaborations. The country aims to achieve 70% AI penetration in key sectors by 2027 and 90% by 2030, with a vision of building a fully AI-powered economy and society by 2035.

Since 2021, China has introduced a series of detailed AI policies and regulations, reflecting a maturing environment that balances innovations with governance and data security. These frameworks include important AI regulations, industry standards, technical guidelines and court rulings that cover algorithms, deepfakes, generative AI, privacy, intellectual property protection, AI ethics and content labelling.


Approach to regulation

China has deliberately chosen an agile and adaptive approach to regulation, aiming to strike a balance between promoting and developing AI technologies and addressing and managing risks.

Rather than enacting a single, comprehensive AI law, lawmakers have initially chosen to focus on specific areas to establish the regulatory scheme. This targeted approach prioritizes high-risk or high-potential areas, such as generative AI, deep synthesis, algorithms and AI labelling, establishing sectoral regulatory frameworks.

Although regulators across industries see AI as a key enabler of growth, some have progressed faster than others in defining rules and governance structures. More sophisticated regulatory regimes have emerged in the financial, e-commerce, transportation, education, pharmaceutical and medical sectors.

Algorithms

The Administrative Provisions on Algorithm Recommendation for Internet Information Services, effective 1 March 2022, marked one of China's earliest moves into AI regulation. These provisions apply to internet information service providers using algorithmic recommendation technologies for news feeds, blogs, short videos, chat rooms, online streaming, search results and other online services.

According to the algorithm recommendation provisions, service providers must disclose that algorithms are in use and allow users to opt out of algorithmic recommendation. The provisions underscore the essential requirements to ensure fairness and transparency; providers are prohibited from offering different prices or discriminating against users based on their personal characteristics in the context of algorithmic recommendation.

The algorithm recommendation provisions mandate that service providers with public opinion attributes or social mobilization capabilities conduct risk assessments and file their algorithms with the Cyberspace Administration of China (CAC).

Failure to abide by the compliance requirements will lead to penalties, including investigations by regulators, administrative fines imposed on both the company and individuals in charge, business suspension, and, in the worst-case scenario, criminal liability.

As of October 2025, China has approved thousands of algorithm filings, reflecting a highly dynamic landscape of AI development and applications.

Deep synthesis technology

In November 2022, China released the Administrative Provisions on Deep Synthesis of Internet-based Information Services, effective in January 2023. The goal was to better govern the development and adoption of deep synthesis technologies by delineating the requirements and prohibitions applicable to deep synthesis services.

The deep synthesis provisions apply to the use of algorithms to synthetically generate or alter video, voice, text, image and other online content. Service providers are prohibited from using deep synthesis technology to produce or disseminate illegal information. They must establish the required mechanisms for user registration, algorithm review, ethics review, content monitoring, data security, personal data protection, fraud prevention, and emergency response.

Among other requirements, these provisions mandate that appropriate labels be added to content generated by deep synthesis technology. This requirement has been further substantiated in the AI Labelling Measures, discussed below, effective 1 Sept. 2025.

Generative AI

On 10 July 2023, the Interim Measures for Administration of Generative AI Services were issued and took effect on 15 Aug. 2023, making China the first country in the world with binding regulations for generative AI.

The generative AI measures broadly define the term to include models and technologies that can generate text, pictures, sounds, videos, codes and other content based on algorithms, models and rules. To strike a proper balance between encouraging AI innovations and addressing their security risks, the measures exclude research, development and the internal use of generative AI technologies from the compliance requirements.

However, service providers offering public-facing generative AI services must satisfy multiple compliance requirements, including ensuring the legality and legitimacy of training data, monitoring content, upholding ethical and core social values, obtaining consent from individuals before using their personal data, protecting IP rights, maintaining transparency and accountability, preventing discrimination, and safeguarding cybersecurity and data privacy. International companies are expressly permitted to establish foreign-invested enterprises to develop and offer generative AI services in China, provided such activities are allowed under China's foreign investment laws.

Similar to those providing algorithmic recommendation services, service providers offering generative AI services with public opinion attributes or social mobilization capabilities to their external customers must conduct security assessments and file their large language models with CAC. The mandatory filing for LLMs is required in addition to the algorithm filing under the Algorithm Recommendation Provisions.

Ethical review measures

Ethical considerations have always posed one of the central challenges in AI development for both businesses and regulators. On 8 Oct. 2023, the Ministry of Science and Technology, the Ministry of Industry and Information Technology (MIIT) and other national governmental authorities jointly issued the Interim Measures for Ethics Review of Science and Technology Activities, effective 1 Dec. 2023, requiring ethical review of AI and other research and development activities in the biological and medical fields.

To specifically address AI ethical complications, on 22 Aug. 2025, these governmental agencies jointly released the draft Measures for the Administration of Ethics for AI Technological Activities for public consultation.

The draft ethics measures apply to all R&D activities in China that may affect health and safety, reputation, the environment, public order and sustainability. Developers and service providers must adhere to the principles of fairness, accountability, justice, risk responsibility and respect for life and human dignity. AI projects falling within the application scope of these measures must undergo ethics review, either internally by ethics committees or externally through qualified external centers. They cannot proceed with the related AI services before the ethics review is complete.

Regulators are also preparing a detailed list of high-risk AI activities. Businesses are advised to keep a close watch on further developments.

AI labelling

The most recent addition to China's AI governance framework is the Labelling Measures for AI Generated Content, which took effect on 1 Sept. 2025. On the same date, the mandatory technical standard on AI content labelling, GB45438-2025, also became effective. Together, the labelling measures and the technical standard provide much-needed clarity and best practices for businesses handling AI-generated content.

The labelling measures apply broadly to internet service providers that use AI to generate text, audio, video, images, virtual scenes and other content. Visible labels with AI symbols are required for chatbots, AI-written content, synthetic voices, face generation or swapping, and immersive scene creation or editing. Explicit labels must remain embedded in any file from which AI-generated content can be downloaded, reproduced or exported.

Other AI-generated content can use implicit labels, such as watermarks or other symbols, that are added to the data files via technical measures but are not easily perceived by users. When an implicit label is added to the metadata of the AI-generated content, the label should include key information such as content attributes, name or identifier of the service provider, and the content reference number.
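The metadata-based implicit label described above can be illustrated with a minimal sketch. This is a hypothetical example in Python: the field names (`aigc`, `service_provider`, `content_reference`) and values are assumptions chosen for illustration and do not reproduce the authoritative metadata schema defined in GB45438-2025.

```python
import json

def build_implicit_label(provider_id: str, content_ref: str) -> str:
    """Serialize a minimal implicit-label metadata blob as JSON.

    NOTE: Field names here are illustrative assumptions only; the
    authoritative schema is defined in the mandatory national
    standard GB45438-2025 and is not reproduced here.
    """
    label = {
        "aigc": True,                      # content attribute: AI-generated
        "service_provider": provider_id,   # name or identifier of the provider
        "content_reference": content_ref,  # unique content reference number
    }
    return json.dumps(label, ensure_ascii=False)

# Hypothetical provider identifier and reference number
metadata = build_implicit_label("example-provider-001", "REF-2025-000123")
```

In practice, a blob like this would be written into the file's metadata (e.g., image EXIF or container-level fields) by technical means, so it travels with the content without being visible to ordinary users.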

Internet platforms must act as watchdogs. If they detect or suspect AI-generated content, they must alert users and may add implicit labels themselves. Non-compliance carries serious consequences, including regulatory investigations, fines, business suspensions and revocation of business permits. In severe cases, criminal liability under the Cybersecurity Law, Data Security Law and Personal Information Protection Law may be triggered.

Agentic AI

While the government has not addressed agentic AI in any specific binding law, the broader regulatory instruments governing recommendation algorithms, automated decision-making and generative AI apply to the development of agentic AI as well. This means companies providing agentic AI products and services must conduct impact assessments, follow ethics rules, maintain content monitoring and exercise proper human oversight.

If agentic AI products and services have public opinion attributes or social mobilization capabilities, a regulatory filing is required. Standards and other soft-law mechanisms have also been released. These include the "Technical Application Requirements for Intelligent Agents in Software Engineering – Part 1: Development of Intelligent Agents," which emphasizes the technical and service capabilities of agentic models. Likewise, an ITU-T recommendation led by the China Academy of Information and Communications Technology, "ITU-T F.748.46 Requirements and Evaluation Methods of Artificial Intelligence Agents Based on Large Scale Pre-Trained Model," sets out standards for evaluating the performance of AI agents.


Wider environment and recent developments

China has a robust privacy and cybersecurity legal framework that applies to AI use cases. Furthermore, China’s judiciary is grappling with the issue of how copyright law applies to AI-generated works.

Privacy and cybersecurity

China’s legal regime on data privacy and cybersecurity is built on three cornerstone national laws: the Cybersecurity Law (CSL), the Data Security Law (DSL) and the Personal Information Protection Law (PIPL). As China has not yet enacted a unified AI law, these statutes apply to AI activities. Where applicable, AI developers and service providers must satisfy the compliance requirements these laws impose, including obtaining consent from data subjects before using personal data as training data, following the legal mechanisms for cross-border data transfer, and conducting impact assessments when using AI in decision-making processes.

It is important to note that on 28 Oct. 2025, China’s top legislature passed major amendments to the CSL. The CSL amendments add new provisions on AI, bringing AI into China’s national law for the first time. The amendments make it clear that China will support the R&D of algorithms; promote the construction of training data resources, computing power, and other AI infrastructures; and expedite rulemaking for AI ethics while firming up AI risk assessment and security governance.

The new CSL amendments will take effect on 1 Jan. 2026. Chinese regulators are anticipated to issue further detailed rules for implementation of these new amendments.

From a technical perspective, China’s National Network Security Standardization Technical Committee issued the AI Governance Framework on 9 Sept. 2025, outlining principles and guidelines for governance and risk management of AI technologies. The AI Governance Framework classifies AI risks into inherent risks and application risks. These include concerns related to LLMs and algorithms, ethical issues, bias and discrimination, contamination of training datasets, data breaches and IT vulnerabilities, criminal and illegal uses of AI, and risks within the supply chain.

The AI Governance Framework recommends adopting organizational and technical measures to address these risks. Suggested steps include ensuring transparency of AI algorithms; protecting IP rights, personal data, and privacy; enhancing AI supply chain security; implementing cybersecurity controls; classifying and grading data and prompts; ensuring traceability of AI applications; filtering and verifying AI outputs to avoid discrimination; and promoting talent development.

AI and copyright

China's courts have been front-runners in exploring how the traditional framework of copyright law applies to works generated with the assistance of AI tools. They have not shied away from critical questions, such as when AI-generated content can qualify as a "work" under the Copyright Law of the People's Republic of China and who owns the resulting rights. In the past two years, the courts have issued several ground-breaking rulings.

One landmark ruling was decided by the Beijing Internet Court in November 2023. In that case, the plaintiff used Stable Diffusion, an AI tool, to generate images from text. The court found that the plaintiff had invested meaningful human creativity by selecting prompts, adjusting parameters and selecting the final image. All these efforts, in the court’s opinion, met the originality requirement under the law, and thus the AI-generated image qualified as a copyrightable work.

Similarly, in March 2025, the Changshu Court in Jiangsu province ruled in favour of copyright protection for an image generated by Midjourney and subsequently edited via Photoshop. The court ruled that the user had engaged in prompt selection and editing, resulting in sufficient originality.

However, some courts have taken a stricter line. The Zhangjiagang Court in Jiangsu province dismissed a claim for copyright protection over AI-generated works because the human author could not provide substantial evidence of creative input. The user's reliance on prompts alone, without meaningful arrangement or editing, failed the originality threshold.

These judicial developments show a remarkable trend: China's courts, in step with lawmakers and regulators, are adopting a balanced stance. On one hand, courts will grant copyright protection when human creativity is identifiable; on the other, they will closely scrutinize the human elements. If the human contribution is minimal or the output is primarily machine-driven, the courts will deny protection.

Enforcement

Regulators have stayed active in the enforcement of AI regulations. There have been multiple rounds of enforcement campaigns jointly conducted by CAC, MIIT and other governmental agencies. These efforts primarily target non-compliant activities such as failure to conduct the mandatory LLM and algorithm filings, dissemination of misinformation, violations of the PIPL when providing AI services, and insufficient organizational and technical measures to protect against cybersecurity incidents or data breaches.

With multiple new laws and regulations related to AI taking effect now or in the near future, stronger enforcement and penalties are expected in the coming months. It is crucial that businesses analyze the impact of AI regulations and carefully design and review their strategies for China's market. Prompt action to ensure compliance is equally critical.


Full series overview

Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.

Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.

Global AI Governance Law and Policy

Jurisdiction Overviews 2025

The overview page for this series can be accessed here.

