Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Artificial intelligence has dominated the conversation in 2025, and Hong Kong's Privacy Commissioner for Personal Data has made its position on the topic clear.
The PCPD's 2024-25 Annual Report, "Leveraging Artificial Intelligence for a New Digital Privacy Era," examines both the opportunities and the risks of AI, and offers a clear message: AI, data security and digital trust are reshaping how Hong Kong protects personal data.
"We published the city's inaugural model personal data protection framework for AI to steer Hong Kong's industries and organisations towards the responsible protection of personal data in using AI, while spearheading efforts to strike a delicate balance between innovation and security in AI among global privacy or data protection regulators," Privacy Commissioner Ada Chung states in the report.
Putting AI ethics into practice
In June 2024, the PCPD published the AI Model Framework on Personal Data Protection, one of the first explicit regional frameworks for governing AI. The framework identifies six stages in the AI life cycle: collection, use, training, security, transparency and access, and correction. Safeguards are required at each point.
For example, businesses must ensure their training data sets contain no scraped or unauthorized personal information and must track where the data came from. Before deployment, a privacy impact assessment must confirm that algorithms cannot re-identify individuals. While systems are running, organizations must be clear about the purpose, the data sources and the reasons behind automated decisions. And once a model is retired, the underlying data must be deleted or anonymized.
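The stage-by-stage safeguards above amount to a compliance checklist, which can be sketched in code. The sketch below is purely illustrative: the stage names follow the report's summary of the framework, but the individual check items and the helper function are hypothetical examples, not the PCPD's official criteria.

```python
# Illustrative checklist for the AI Model Framework's six life-cycle stages.
# Stage names follow the report's summary; the individual check items are
# hypothetical examples, not the PCPD's official requirements.
LIFECYCLE_SAFEGUARDS = {
    "collection": ["lawful_basis_documented", "data_provenance_tracked"],
    "use": ["use_matches_collection_purpose"],
    "training": ["no_scraped_or_unauthorized_data"],
    "security": ["access_controls_in_place"],
    "transparency_and_access": [
        "purpose_disclosed",
        "data_sources_disclosed",
        "automated_decisions_explained",
    ],
    "correction": ["retired_model_data_deleted_or_anonymized"],
}

def unmet_safeguards(project_status):
    """Return, per stage, the checklist items a project has not satisfied."""
    gaps = {}
    for stage, checks in LIFECYCLE_SAFEGUARDS.items():
        missing = [c for c in checks if not project_status.get(c, False)]
        if missing:
            gaps[stage] = missing
    return gaps
```

Calling `unmet_safeguards({"lawful_basis_documented": True})`, for instance, flags the outstanding items in every stage, producing the kind of auditable record the framework asks organizations to keep.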
The message is clear: It should be easy to examine and measure how AI is run.
In its AI Model Framework on Personal Data Protection and its Checklist on Guidelines for the Use of Generative AI by Employees, the PCPD makes clear that using real customer conversation data to train a generative AI system without proper authorization, and beyond the original collection purpose, would likely contravene the PDPO's purpose limitation principle.
In early 2025, the PCPD went further and published a checklist of steps employers should take when adopting generative AI. It advises businesses to designate an internal reviewer, usually a privacy or legal officer, to authorize employee use of tools like ChatGPT or Microsoft Copilot. The PCPD has also taken a leadership role internationally, serving as co-chair of the Global Privacy Assembly's AI Ethics and Data Protection Working Group since October 2024.
The PCPD and Hong Kong Productivity Council's Hong Kong Enterprise Cyber Security Readiness Index and AI Security survey found approximately 70% of companies worry that AI threatens privacy, yet fewer than 30% have formal mechanisms in place to manage that risk. Companies' top fears include model errors that leak data, untested third-party software embedded in internal systems, and biased or unfair AI outputs.
The PCPD recommends three steps to close the gap: Create an AI accountability matrix showing who is responsible for what; include AI projects in annual privacy audits; and do business only with vendors certified under ISO/IEC 42001, the new international AI management system standard.
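The first of those recommendations, an accountability matrix, is essentially a table mapping AI-related activities to responsible roles. A minimal RACI-style sketch might look like the following; the activity and role names are illustrative placeholders, not anything the PCPD prescribes.

```python
# Minimal, illustrative AI accountability matrix (RACI-style): each activity
# has exactly one accountable owner plus roles to be consulted.
# Activity and role names are hypothetical placeholders.
ACCOUNTABILITY_MATRIX = {
    "approve_employee_genai_use": {
        "accountable": "privacy_officer",
        "consulted": ["legal"],
    },
    "vendor_iso42001_check": {
        "accountable": "procurement",
        "consulted": ["security"],
    },
    "annual_privacy_audit": {
        "accountable": "internal_audit",
        "consulted": ["privacy_officer"],
    },
}

def owner_of(activity):
    """Look up the single accountable owner for an activity."""
    return ACCOUNTABILITY_MATRIX[activity]["accountable"]
```

Keeping exactly one accountable owner per activity is the point of the matrix: when an audit or incident review asks "who signed this off," the answer is unambiguous.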
A regulator that enforces and educates
The statistics speak for themselves. The PCPD received over 3,400 complaints in 2024-25, opened 134 inquiries and 88 criminal investigations, and made 21 arrests. Doxxing cases fell from more than 800 two years ago to just 65 in the 2024-25 report, a sign that the 2021 amendment to Hong Kong's Personal Data (Privacy) Ordinance may be curbing noncompliant behavior.
The biggest change, however, is cultural. Once a quiet enforcer of the law, the PCPD is now an active teacher and public advocate. Over the past year, it ran hundreds of social media campaigns and spoke to about 60,000 individuals.
Keeping personal data safe in Hong Kong, it seems, is becoming less a legal requirement and more a shared value.
From responding to data breaches to preventing them
The PCPD handled 207 data breaches last year. While that is a significant number, its focus has shifted from prosecuting breaches to preventing them. The "Data Security Scanner," part of the Data Security Package launched in October 2024, gives companies and organizations a quick way to assess their data security.
Most security incidents stem from outdated systems or misconfigurations, not from malicious actors. The PCPD therefore plans to focus on building capacity rather than punishing people, an approach that can help small and medium-sized enterprises see privacy as an asset rather than a burden.
Safer, easier cross-border transactions
The Greater Bay Area Standard Contract for Cross-boundary Flow of Personal Information was made available to all industries in November 2024, a major step forward.
By signing the standard contract and registering it with the authorities, Hong Kong businesses can transfer personal information to mainland China without having to go through more rigorous security procedures.
The duties are clearly defined: Exporters must complete risk assessments, importers can use data only for its intended purpose, and both sides must delete or anonymize the information when the contract ends.
The PCPD argues the new framework can cut compliance time from 90 days to under 30 and reduce legal expenditure by up to 40%. It positions Hong Kong as a hub for data sharing between mainland China and the rest of the world, especially in finance, health care and cloud services.
Using old laws with new technology
The PCPD's enforcement of the PDPO is also more transparent now. In 2024, it made more than 60 public statements, answered more than 200 media inquiries and tracked thousands of news pieces about privacy.
Two types of cases stood out that year. First, several companies were investigated for posting recruitment advertisements without disclosing their identities, breaching rules on transparency in data collection. Second, a blockchain company was fined for failing to tell users how their personal information was being used, the first privacy case in Hong Kong involving decentralized finance.
These cases show how the regulator applies existing privacy law, firmly but flexibly, to emerging technologies.
Raising awareness of privacy in daily life
Public education remains one of the PCPD's main goals. The office produced short videos showing how realistic face-swapping technology has become, helping people spot deepfakes and online scams. Volunteers visited senior centers and neighborhoods to distribute "anti-fraud packs" and explain six simple ways to stay safe.
Through these initiatives, protecting privacy is becoming an everyday habit rather than a matter of debate.
Looking ahead: Earning trust by following the rules
By focusing on teamwork, giving small businesses more power, and exchanging data across the Greater Bay Area, Hong Kong is becoming a living lab for privacy innovation in the Asia-Pacific region.
As Hong Kong welcomes a new era of digital privacy, businesses, regulators and citizens alike can turn compliance into trust, and make that trust their most valuable asset.
Sylvia Zhang, CIPP/A, CIPP/E, CIPP/US, CIPM, FIP, is APAC privacy lead counsel at Align Technology.
