Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

In a pivotal moment for privacy professionals worldwide monitoring the global spread of AI regulation, South Korea's Ministry of Science and ICT unveiled the draft enforcement decree for its landmark Artificial Intelligence Framework Act on 8 Sept.

Many global audiences expected a framework echoing the risk-based approach of the EU's AI Act. Instead, the draft confirms South Korea is charting a starkly different course, prioritizing industrial promotion so thoroughly that it renders the law's regulatory functions largely symbolic.

This intent is perfectly captured by the government's decision to rename the law's central oversight body from the "National AI Committee" to the "National AI Strategy Committee," shifting the focus from oversight to economic growth. An examination of the decree shows the government employed two primary strategies to neutralize the law's impact: delaying its consequences and minimizing its reach.

The act passed in December 2024 and will take effect 22 Jan. 2026.

Delaying the bite: Postponing real consequences

The Ministry of Science and ICT masterfully postpones any real regulatory consequences, most blatantly through a multi-year "grace period" for all administrative fines. The MSIT's official documents state this measure is intended to achieve an "effect identical to a regulatory moratorium."

While the government has yet to set a definitive timeframe for this grace period — stating the exact duration will be finalized after consulting stakeholders — it has made clear that during this period, companies will face no financial penalties for violations such as failing to notify users they are interacting with AI or for certain overseas firms failing to appoint a domestic representative.

This move single-handedly removes the immediate threat of punishment that gives regulations their teeth. Furthermore, even if fines were eventually imposed, the upper limit is a mere KRW30 million — approximately USD22,500 — a nominal sum that further diminishes the regulation's potential as a serious deterrent.

Shrinking the cage: Minimizing the law's reach

Where regulations could not be delayed, their scope was drastically narrowed to apply to the fewest possible entities, effectively shrinking the regulatory cage.

First, the scope of "high-risk AI" has been effectively frozen. While the parent law contains a list of systems deemed high-risk, it also empowers the enforcement decree to add to that list as new risks emerge. The MSIT deliberately chose not to use this power. By forgoing the opportunity, the government has signaled a clear reluctance to expand oversight, effectively capping the scope of high-risk regulation at the bare minimum required by law.

This minimalist strategy is complemented by technical standards and liability carve-outs. The benchmark for defining "high-performance AI" is set at 10 to the power of 26 cumulative floating-point operations used in training, 10 times higher than the EU AI Act's threshold of 10 to the power of 25 for general-purpose models with systemic risk, ensuring very few models will ever meet the criteria.

The decree creates a convenient distinction between "AI developers" and "AI service providers," stating that if the original developer has met its obligations, the company using the AI does not need to take separate measures. This absolves the vast majority of businesses that use, but do not build, AI systems from significant regulatory responsibility.

Finally, the bar for extraterritorial application was set exceptionally high. The duty for foreign companies to appoint a domestic representative applies only to those with annual revenue over KRW1 trillion, approximately USD750 million, AI-sector revenue over KRW10 billion, or more than 1 million domestic users. This shields all but the largest global tech giants from a major compliance hurdle.

The only teeth left

The combined effect of these delays and limitations creates a regulatory framework that is largely an empty shell, carefully engineered to shield corporations from any meaningful burden.

This leaves a critical question: where will South Korea's meaningful AI oversight come from? The answer, paradoxically, lies outside the AI Act itself. With the new law rendered impotent, the only viable path for future AI regulation will be through the established and powerful lens of privacy and data protection law.

South Korea's Personal Information Protection Act, with its robust enforcement agency, remains the only regulation with the teeth necessary to address the challenges of the AI era.

Kyoungsic Min, CIPP/E, AIGP, is privacy counsel and Asia regional lead at VeraSafe, and Seoul KnowledgeNet Chapter Co-Chair.