Pseudonymization as a gateway to AI data use: South Korea's emerging privacy governance model

South Korea's evolving privacy framework pushes data-use boundaries by operationalizing pseudonymization for AI development.

Contributors:
Kyoungsic Min
AIGP, CIPP/E, FIP
Privacy Counsel and Asia Regional Lead
VeraSafe
Across jurisdictions, regulators are exploring different ways to support artificial intelligence development without undermining data protection. South Korea is taking a particularly distinctive path. On 31 March 2026, the Personal Information Protection Commission (PIPC) released its revised Pseudonymized Information Processing Guidelines, signaling an approach that tests how far the boundaries of data protection can be extended while formally preserving its core principles.
Rather than loosening its framework, South Korea is turning pseudonymization into a regulatory gateway — a legal condition that enables certain forms of data use without consent. This shift is not driven by legislation alone. Through recent regulatory guidance and a notable Supreme Court decision, South Korea is shaping a system in which pseudonymization does more than reduce risk: it determines who can use data, under what conditions, and for which purposes.
Pseudonymization as a built-in legal gateway
To understand this shift, it is important to examine how pseudonymization is positioned in South Korean law. The Personal Information Protection Act permits the use of pseudonymized data without consent for purposes such as statistics, scientific research and public interest recordkeeping, and structures data combination around pseudonymization.
Crucially, in South Korea, pseudonymization is not merely a safeguard layered on top of a separate legal basis. It is embedded in the law as a condition that enables secondary use itself.
This marks a key difference from practice under the EU General Data Protection Regulation. In the EU, particularly in AI training contexts, the primary question is whether a lawful basis, such as legitimate interest, can be established, with pseudonymization functioning as a measure that supports that justification. In South Korea, by contrast, pseudonymization operates as a gateway into a legal regime that permits certain types of processing without consent.
The significance of the 2026 Guidelines lies not in creating this structure, but in operationalizing a legal design that already existed, translating it into a framework applicable to real-world data use, including AI development.
PIPC: From legal design to operational framework
The central feature of the 2026 Guidelines is the shift toward a risk-based, contextual approach to pseudonymization. Rather than defining it through fixed technical thresholds, the Guidelines emphasize factors such as processing environments, access controls, intended use and residual re-identification risks.
This reframes pseudonymization from a purely technical state into a governed condition. At the same time, it signals a clear administrative direction: enabling AI development through the structured use of pseudonymized data.
Importantly, the Guidelines explicitly align this framework with the realities of AI development. They clarify that AI development and service improvement may qualify as "scientific research" when they involve hypothesis-setting, data analysis, validation and iterative refinement. They also provide concrete examples, including fraud detection systems, medical imaging analysis, chatbots and intelligent CCTV.
In addition, the Guidelines allow organizations to define expandable purposes for closely related downstream uses of the same dataset, reflecting the iterative and cumulative nature of AI model development.
These developments build on earlier regulatory efforts. In 2024, PIPC clarified that publicly available personal data may be used for AI development under the legitimate interest provision in certain circumstances. In 2025, it issued guidance on generative AI development and deployment, aiming to reduce uncertainty for businesses working with large language models.
The timing of the revision is also telling. While the 2024 Guidelines envisaged a three-year review cycle, pointing to a revision around 2027, the 2026 update arrived earlier than expected — signaling an accelerated and more proactive regulatory response to the demands of AI development.
Taken together, these measures suggest that PIPC is not merely interpreting existing law, but actively shaping how that law operates in practice to enable AI data use.
The Supreme Court: Limiting ex ante resistance
This trajectory is reinforced by judicial interpretation. In July 2025, the South Korean Supreme Court held that pseudonymization does not constitute "processing" for the purpose of a data subject's right to request suspension of processing.
The Court emphasized that pseudonymization is, by nature, a measure designed to reduce identification risks and referred to the legislative purpose of promoting data use in emerging sectors such as AI, cloud computing and the Internet of Things.
The practical effect is clear. By excluding pseudonymization from the scope of this right, the Court narrows one potential avenue for data subjects to block data use at an early stage.
In doing so, the judiciary also contributes to a broader shift: interpreting the pseudonymization framework in a way that reduces friction in data use and supports data-driven innovation.
A new form of privacy governance
This does not mean that safeguards disappear. Pseudonymized data remain regulated, and core obligations still apply.
But in practice, once data are lawfully collected, pseudonymization opens a broad pathway for their reuse in AI development without additional consent — and, following the 2025 Supreme Court decision, one that data subjects have limited practical ability to halt ex ante.
This is where the distinctiveness of the South Korean model becomes clear. Legislative design, regulatory guidance and judicial interpretation are converging toward a common direction: treating pseudonymization not only as a safeguard, but as a mechanism for enabling and structuring lawful data use, with limited scope for data subjects to intervene in practice.
South Korea is moving toward a model in which legal frameworks are actively interpreted to enable AI training, testing the limits of data use within privacy law.
Rather than resolving the tension between AI and privacy by weakening one side, South Korea is experimenting with how far existing legal structures can be extended to accommodate data-driven innovation. The opportunity is clear. So is the risk. As pseudonymization evolves from a protective measure into a gateway to legality, it begins to reshape the architecture of data protection itself.
