Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
South Korea's Personal Information Protection Commission has forged a robust privacy regime that reflects distinct domestic priorities while increasingly shaping the global debate.
A string of headline investigations — from Kakao Pay, Apple and Alipay to Meta, AliExpress and the Chinese AI start-up DeepSeek — shows the regulator marrying meticulous guidance with muscular enforcement. Most strikingly, it has ordered the destruction of an artificial intelligence model trained on unlawfully obtained data, an approach that puts South Korea at the leading edge of global privacy practice.
A new playbook in three acts
Model deletion becomes real. The 2021 Scatter Lab case confirmed the Personal Information Protection Act's reach into AI training data, but the January 2025 Kakao Pay decision brought teeth. After finding the wallet provider sent 40 million users' data to Alipay, which in turn built "NSF scores" for Apple Pay without notice or consent, the PIPC not only levied KRW8.3 billion in fines — it ordered Alipay to erase the algorithm itself.
Why does that matter? Because model deletion eliminates the very asset an AI developer hopes to monetize. While the U.S. Federal Trade Commission has occasionally required "algorithm disgorgement," European data-protection authorities have so far focused on underlying data. South Korea's precedent shows regulators can, and will, go after the crown jewels when a dataset is poisoned at the source.
Cross-border transfers draw the quickest blood. The PIPA generally demands granular user notice and consent, or equivalent safeguards, before sending personal data overseas. In July 2024, the PIPC fined AliExpress KRW1.97 billion for routing South Korean shoppers' details to third-country sellers without disclosure, then ordered a top-to-bottom rewrite of its privacy policy and account-deletion flow.
The Kakao Pay case raised the stakes. The regulator labeled the China-bound transfer outright illegal and used that breach to justify model deletion. The takeaway for multinationals is unmistakable — sloppy transfer governance can cost both cash and code.
A live probe underscores the point. DeepSeek, a ChatGPT-style app developed in China, voluntarily withdrew from South Korean app stores in February 2025 after investigators detected silent API calls to ByteDance servers. In April 2025, the PIPC issued a corrective order requiring DeepSeek to halt unlawful cross-border transfers, delete previously exported data, publish a Korean-language privacy policy, designate a domestic representative and undergo follow-up compliance audits.
Consent remains king — especially for behavioral ads. Record 2022 fines against Google and Meta for opaque third-party tracking did not close the chapter. In November 2024, Meta absorbed another KRW21.6 billion penalty after the PIPC found it inferred religious and political views from on-platform activity to power its "ad topics" engine — again, without separate explicit consent.
The message is clear: bundled or pre-ticked permissions will not satisfy South Korean standards, particularly when sensitive data or off-platform signals are involved.
Practical implications for privacy programs
Taken together, these cases paint a regulator intent on rapid, precedent-setting action. Organizations that build or deploy AI in South Korea — or merely touch South Korean personal data — should internalize three lessons.
Prove your data pedigree. The era of "collect now, figure it out later" is over. Maintain auditable records that tie each input dataset to a lawful basis and geographic transfer mechanism.
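One way to make that pedigree auditable is a machine-readable provenance record per dataset, fingerprinted so later audits can detect tampering. The sketch below is illustrative only — the field names and values are assumptions, not drawn from PIPC guidance or any real pipeline.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import hashlib
import json

@dataclass(frozen=True)
class DatasetProvenance:
    """Illustrative record tying one input dataset to its legal footing."""
    dataset_id: str
    source: str                         # where the data was collected
    lawful_basis: str                   # e.g. "consent", "contract"
    transfer_mechanism: Optional[str]   # None if data never leaves the country
    collected_at: str                   # ISO 8601 timestamp

    def fingerprint(self) -> str:
        # Stable hash over a canonical JSON form, so an auditor can verify
        # the record has not been altered since it was logged.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = DatasetProvenance(
    dataset_id="kr-payments-2024-q3",
    source="first-party app telemetry",
    lawful_basis="consent",
    transfer_mechanism=None,  # hypothetical: this dataset stays in-country
    collected_at="2024-09-30T00:00:00+00:00",
)
print(record.fingerprint()[:12])
```

Because the record is frozen and hashed, the same evidence that satisfies an internal audit can be handed to a regulator without reconstruction after the fact.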
Treat voluntary guidance as mandatory. The PIPC's AI Privacy Risk Management Model and six-part AI guidance series are officially non-binding, yet enforcement orders routinely reward documented alignment. Embedding those checklists early is cheaper than retrofitting them under subpoena.
Engineer for reversibility. Whether the threat is data-subject deletion requests, a transfer suspension or a full model-erasure order, systems should include kill switches that let engineers trace and retract specific training rows — or, in extreme cases, consign an entire model to the shredder. Boards increasingly ask for proof this is possible; regulators already do.
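The tracing half of that kill switch can be as simple as indexing every training example by data subject, so an erasure order translates into a concrete row set to exclude from the next retrain. A minimal sketch, with entirely hypothetical names — real pipelines would back this with durable storage and an unlearning or retraining job:

```python
class TraceableTrainingSet:
    """Toy index mapping data subjects to their training rows,
    so specific rows can be traced and retracted on demand."""

    def __init__(self) -> None:
        self._rows: dict = {}        # row_id -> feature payload
        self._by_subject: dict = {}  # subject_id -> set of row_ids

    def add(self, row_id: str, subject_id: str, payload: dict) -> None:
        self._rows[row_id] = payload
        self._by_subject.setdefault(subject_id, set()).add(row_id)

    def retract_subject(self, subject_id: str) -> list:
        """Remove every row tied to one data subject and return the
        removed row IDs, so a retrain job knows what to exclude."""
        removed = sorted(self._by_subject.pop(subject_id, set()))
        for row_id in removed:
            self._rows.pop(row_id, None)
        return removed

ts = TraceableTrainingSet()
ts.add("r1", "user-42", {"feature": 0.7})
ts.add("r2", "user-42", {"feature": 0.1})
ts.add("r3", "user-99", {"feature": 0.5})
print(ts.retract_subject("user-42"))  # → ['r1', 'r2']
print(len(ts._rows))                  # → 1
```

The design point is that retraction returns evidence: the list of removed rows is exactly the artifact a board or regulator would ask to see when verifying that a deletion order was honored.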
What's next?
South Korea hosts the Global Privacy Assembly in Seoul in October, just as its AI Basic Act and the EU AI Act edge toward implementation. Expect the PIPC to showcase its model-deletion precedent and court allies for coordinated cross-border probes.
For privacy professionals, the upshot is opportunity as much as risk. South Korea offers a live test bed for governance techniques — pre-launch adequacy reviews, sandboxing, rich transparency dashboards — that could influence standards worldwide. Those who adapt first will not only avoid fines, they'll help write the next chapter of global AI and privacy law.
Kyoungsic Min, AIGP, CIPP/E, is privacy counsel and Asia regional lead at VeraSafe. He is a dual-qualified lawyer in Korea and Washington, D.C., specializing in privacy, data protection and AI governance.