The rapid evolution of large language and multimodal models since groundbreaking releases such as GPT-3, along with the proliferation of AI agents, has raised concerns about the technology's trustworthiness.

In response, governments worldwide are crafting pivotal AI regulations to foster innovation while ensuring the safe and responsible use of this transformative technology.

The EU AI Act has set a precedent as the first comprehensive legal framework for AI, and China is not far behind, actively advancing its legislative agenda with a focus on generative AI services.

The Chinese Academy of Social Sciences has spearheaded the development of an expert draft proposal of the Model AI Law, known as the China Model Law, which offers a scholarly perspective on AI governance and serves as a valuable reference for legislative efforts. The latest iteration, version 2.0, was unveiled 16 April.

The China Model Law comprises six chapters that cover the fundamental principles of AI, strategies for promotion, risk management systems, allocation of primary responsibilities, design of governance mechanisms and legal liabilities. It embodies a Chinese governance approach that encourages technological advancement while maintaining a firm line on safety.

Juxtaposing the China Model Law with the EU's AI legislation highlights the international consensus on AI governance, showcases the law's unique features, and reveals both its interoperability with other frameworks and its distinctive legislative design with Chinese characteristics.

Risk governance and innovation: A balancing act

The EU AI Act categorizes AI systems based on their risk levels — from prohibited and high-risk applications to general-purpose AI — setting forth specific regulatory requirements for each category.

High-risk AI systems are required to establish a risk-management system, ensure data governance, create technical documentation, maintain records, guarantee transparency, provide human oversight, and ensure accuracy, robustness and cybersecurity.

For GPAI models, providers must prepare technical documentation, share information with downstream developers, respect copyright, summarize training data and appoint a legal representative. A model is presumed to pose systemic risk when the cumulative compute used to train it exceeds 10^25 floating-point operations, in which case the provider must also notify the European Commission.
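
For a sense of scale, a model's cumulative training compute is often roughly estimated with the widely used 6ND approximation, about six floating-point operations per parameter per training token. The Python sketch below applies this heuristic to an entirely hypothetical model; neither the approximation nor the example figures come from the AI Act itself, which sets the threshold but prescribes no estimation method.

```python
# Rough training-compute estimate via the common 6*N*D heuristic:
# ~6 FLOPs per parameter per training token. Illustrative only; the
# AI Act defines the threshold but not any estimation method.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs (EU AI Act)

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute for a dense model."""
    return 6 * num_parameters * num_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")            # ~6.30e+24
print("Presumed systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD)  # False
```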

If deemed to pose systemic risks, these models must additionally undergo model evaluations, assess and mitigate systemic risks at the EU level, track and report serious incidents, and ensure a high level of cybersecurity protection.

The China Model Law adopts a similar risk-based logic but simplifies the risk grading by establishing a negative list for AI. Activities involving AI on this list require prior permission, while those outside the list only need to fulfill registration obligations.
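
In practical terms, this gating rule reduces to a single membership test. The sketch below is a minimal illustration under assumed, hypothetical list entries; the Model Law itself delegates the actual contents of the negative list to national authorities.

```python
# Minimal sketch of the negative-list rule described above. The entries
# here are hypothetical placeholders, not drawn from the Model Law.

NEGATIVE_LIST = {"critical_infrastructure_control", "biometric_identification"}

def required_procedure(ai_activity: str) -> str:
    """Prior permission for listed activities; registration for everything else."""
    if ai_activity in NEGATIVE_LIST:
        return "prior permission required"
    return "registration only"

print(required_procedure("biometric_identification"))  # prior permission required
print(required_procedure("customer_service_chatbot"))  # registration only
```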

The model law imposes additional obligations on AI developers and providers within the negative list, such as creating and maintaining technical documents, establishing a quality management system, and ensuring human oversight.

Considering the broad application of foundational models and the potential risks they may pose downstream, the China Model Law explicitly defines the obligations of foundational model developers. These include establishing a safety risk-management system; improving model and data management; adhering to principles of openness, fairness and justice; ensuring investment in safety risk management resources; assisting other developers and providers in fulfilling their duties; establishing external social supervision; and issuing social responsibility reports.

This streamlined approach allows for dynamic risk management, reducing the compliance burden on early-stage technological advancements.

Embracing the open-source ecosystem

Both frameworks support the development of open-source AI, albeit with tailored exemptions and obligations, treating a flourishing open-source ecosystem as critical for industry growth.

The AI Act generally exempts open-source AI from its scope unless it constitutes prohibited AI or a GPAI model. Open-source GPAI models may be waived from certain obligations, such as technical documentation, transparency information for downstream providers and, if applicable, the appointment of a legal representative. However, a policy adhering to copyright directives and detailed disclosure of training data use are still required, and no exemptions apply to GPAI models with systemic risks.

The China Model Law aligns with the AI Act in easing or waiving the responsibilities of open-source AI, encouraging transparency and compliance management to foster a healthy open-source ecosystem.

Open-source AI provided free of charge and with transparency can be exempt from legal liability. Providers that can demonstrate adherence to a national AI compliance governance system and have implemented corresponding safety measures may be eligible for reduced or waived legal liability.

Moreover, the China Model Law strongly encourages and promotes open-source AI from an industrial policy perspective, offering substantial support for industry development. It advocates for the creation of an open-source AI foundation, specific tax incentives for open-source AI, and the establishment of open-source development platforms, communities and projects.

It also encourages governments at all levels and state-owned enterprises to purchase open-source AI products and services that meet national standards.

Navigating copyright challenges in AI training

A pressing issue in AI development is the tension between copyright protection and the training of foundational models, exemplified by high-profile lawsuits. Both the EU and China seek to address this through specific measures that aim to balance copyright holder interests with AI innovation, suggesting an evolving legal landscape in response to technological advancements.

The AI Act requires providers of GPAI to respect copyright law, protect copyright and related rights, and adhere to the reserved rights expressed by rights holders under Article 4(3) of the EU Copyright Directive.

Providers are required to publish a detailed summary describing the content used to train GPAI models, enabling copyright holders and stakeholders to understand and enforce their rights.

The China Model Law proposes establishing an intellectual property statutory licensing system adapted to AI development, clarifying the protection and revenue distribution mechanisms for AI-generated works, and reestablishing a balance among stakeholders.

It introduces a novel safe harbor rule for AI-generated work infringement, exempting AI providers from liability for intellectual property infringement under certain conditions, such as marking AI-generated works and establishing complaint acceptance, warning and violation disposal mechanisms.

Illuminating the AI governance path

Both the AI Act and China Model Law illuminate the path forward for AI governance, underscoring the dual objectives of promoting sustainable AI development while ensuring risk mitigation.

Through nuanced legal frameworks that reflect their unique sociopolitical contexts, they offer a glimpse into the collaborative and diverse efforts necessary to shape the future of AI in a way that benefits all.