Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

April is a very special month for privacy professionals, as the community gears up for the IAPP's annual signature event, the Global Privacy Summit 2025 in Washington, D.C. I'm really looking forward to catching up with old privacy friends and colleagues and making new acquaintances.

Here in Beijing, with the warm spring breeze, there have been some positive regulatory developments. Companies are pleased to see a continued relaxation of cross-border data transfer restrictions.

On 9 April, the Cyberspace Administration of China, the country's top data regulator, posted a list of Q&As on its official website, addressing common concerns raised by multinational corporations regarding their business operations in China. The Q&As further clarify that general data, which doesn't fall within the scope of personal data or important data, can be freely transferred out of China. However, important data and personal data exceeding the prescribed thresholds are subject to security assessment, standard contractual clauses, or certification, depending on the specific transfer scenario.

Defining and identifying important data has been a practical challenge for many companies, mainly due to the lack of consistent and detailed implementation guidelines. The Q&As attempt to offer some guidance by referring companies to Article 62 of the Network Data Security Management Regulations, effective 1 Jan. 2025, and Annex G of the national standard GB/T 43697-2024. Under the Cybersecurity Law, Data Security Law, and Personal Information Protection Law, important data must pass a security assessment by the CAC before it can be transferred overseas.

The Q&As revealed that by March 2025, the CAC had completed reviews of 44 projects involving the export of 509 categories of important data. Among these, the failure rate was 15.9% and the pass rate was 63.9%.

For those familiar with the work of the National Information Security Standardization Technical Committee, known as TC260, the sheer number of national standards and guidelines in the data protection and cybersecurity fields is truly astonishing. On 9 April, TC260 issued six new national standards. These cover security requirements for operations and maintenance products, data security evaluation institutions, automated decision-making using personal data, government data processing, organizational requirements for personal data protection by large internet enterprises, and network equipment security requirements for programmable logic controllers. The new TC260 standards will take effect 1 Oct.

The Cybersecurity Law, one of China's three cornerstone data and cyber laws, is set to undergo further changes. On 28 March, the CAC released a draft amendment to the CSL, inviting public comments until 27 April. The proposed amendments aim to increase compliance requirements for critical information infrastructure operators and strengthen the legal consequences of noncompliance, bringing penalty levels in line with the PIPL and other recent cybersecurity regulations, such as the Network Data Security Management Regulations.

While the CSL consultation draft raises administrative fines for violators, it also adds new provisions allowing legal liability to be waived or reduced for first-time or minor violations that are promptly rectified. This shows the CAC taking a more balanced approach, encouraging compliance while combining leniency with severity.

Regarding AI governance, China's new AI Labeling Measures will come into effect 1 Sept. Under the measures, AI service providers are required to apply appropriate explicit and implicit labels to AI-generated text, graphics, audio, video and other online content. Noncompliance with the labeling requirements can trigger enforcement actions such as fines, unannounced inspections, investigations, business suspensions and permit revocations, as well as reputational damage.

Turning to Hong Kong, the Office of the Privacy Commissioner for Personal Data published the Checklist on Guidelines for the Use of Generative AI by Employees on 31 March. The checklist emphasizes the importance of protecting personal data privacy when using AI technologies and calls for the lawful and ethical use of generative AI to prevent bias.

The PCPD also offers practical tips in the checklist. For example, it suggests identifying the permissible scope for employees to use AI, with examples like drafting documents, summarizing information, and creating video and audio content. It also encourages companies to clearly distinguish between work and personal devices and specifies the penalties for employees who don't follow the guidelines.

Barbara Li, CIPP/E, is a partner at Reed Smith.

This article originally appeared in the Asia-Pacific Dashboard Digest, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.