Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

It was wonderful seeing so many familiar faces at the IAPP Global Privacy Summit 2025 in Washington, D.C., three weeks ago. I'm already looking forward to IAPP Asia 2025: Privacy Forum and AI Governance Global in Singapore this July.

In China, artificial intelligence governance is absolutely in the spotlight. The Cyberspace Administration of China launched a three-month special campaign starting 1 May to regulate AI technology abuse.

In the first phase, the CAC will zero in on six key issues: failing to make the required algorithm or large language model filing with the regulator; selling noncompliant AI products or services; using illegal, unauthorized or untrue data as training datasets; neglecting to review and monitor AI-generated content; not implementing AI labeling requirements; and failing to address security risks in key industries, such as the misleading aspects of AI-generated medical prescriptions.

In the second phase, the focus will shift to issues including using AI to create and spread rumors or false or pornographic information, impersonating others to carry out illegal acts, engaging in online bullying and trolling via AI, and harming minors' interests through AI-generated rumors, pornographic material or addiction-inducing content.

While ramping up AI governance, China is also actively educating and encouraging the younger generation to use AI properly. A few days ago, China's Ministry of Education issued two AI guidelines for primary and secondary school students. In primary schools, AI education emphasizes experience and interest-building. Students are expected to learn basic knowledge about how simple AI tools work, start developing basic logical thinking, and build cybersecurity and privacy awareness. In high schools, the focus is on helping students understand AI's technical logic, deepen their ethical understanding and explore using AI to solve real-world problems. AI will serve as a useful tool for teachers to boost the quality and efficiency of classroom teaching and offer personalized education.

In Hong Kong, as AI becomes more prevalent and brings privacy and cybersecurity risks, the Office of the Privacy Commissioner for Personal Data is actively promoting AI compliance and best practices. The PCPD completed compliance checks on 60 organizations regarding their AI use and found that 80% of them used AI in daily operations, a 5% increase from 2024. Around 54% of AI-using organizations employed three or more AI systems, mainly in customer service. Half of these organizations collected and used personal data via AI systems, and most provided personal data collection statements, stored data properly, and implemented security measures such as encryption and access control.

The PCPD stressed that AI comes with risks, so organizations are encouraged to adopt best practices such as formulating AI strategies and governance structures, conducting comprehensive risk assessments, regularly auditing AI systems, developing internal policies or guidelines on the use of generative AI by employees, complying with Hong Kong's privacy ordinance for personal data protection, and communicating with stakeholders.

AI governance will surely stay a hot topic in the APAC region. I'll keep you updated on new developments.

Barbara Li, CIPP/E, is a partner at Reed Smith.

This article originally appeared in the Asia-Pacific Dashboard Digest, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.