South Korea has become the second jurisdiction in the world, following the EU, to enact a comprehensive law regulating artificial intelligence.
The Framework Act on the Development of Artificial Intelligence and Establishment of Trust Foundation, or the AI Framework Act, which consolidated 19 bills proposed in the 22nd National Assembly, passed the plenary session 26 Dec. 2024 with overwhelming bipartisan support, despite the political turmoil surrounding President Yoon Suk Yeol's declaration of martial law and subsequent impeachment.
The act was promulgated 21 Jan. 2025 and will take effect 22 Jan. 2026, following a one-year preparation period. During this period, subordinate legislation and sector-specific guidelines defining the specific types and scope of high-impact AI will be finalized and enacted.
Purpose and definitions
The act aims to protect citizens' rights and dignity, improve their quality of life, and strengthen national competitiveness by regulating fundamental matters necessary for the sound development of AI and establishment of a foundation of trust.
It defines AI as "the electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgment, and language understanding" and AI systems as "AI-based systems that infer outputs such as predictions, recommendations, and decisions that affect real and virtual environments for given objectives, with varying levels of autonomy and adaptability."
Scope and regulatory approach
The AI Framework Act includes both promotional provisions to support AI development and regulatory measures to establish a foundation of trust. For promotion, it relies on broad concepts such as AI, AI systems and AI technology, while regulation primarily targets high-impact and generative AI.
High-impact AI refers to AI systems used in 11 specified areas that may significantly impact or pose risks to human life, physical safety and fundamental rights. Generative AI is defined as "AI systems that generate various outputs such as text, sound, images, and videos by mimicking the structure and characteristics of input data."
The AI operators subject to the law are divided into AI development operators, which develop and provide AI, and AI utilization operators, which provide AI products or services using AI supplied by developers.
Governance structure
The Ministry of Science and ICT is the main authority responsible for AI policy implementation, while the National AI Committee, under the president, deliberates and decides on AI policies. The AI Policy Center and AI Safety Institute support policy implementation.
Promotional measures
The AI Framework Act includes a wide range of promotional measures: support for AI technology development and for the safe use and standardization of AI technology; promotion of policies related to AI learning data; support for AI technology adoption and utilization; special support for SMEs; promotion of start-ups; facilitation of AI convergence and utilization; institutional improvement and support; support for professional workforce development; promotion of international cooperation; establishment of AI clusters; creation of AI demonstration infrastructure; and promotion of policies related to AI data centers.
Ethical considerations
The law provides a basis for the government to establish ethical AI principles and allows educational institutions, research institutions and AI businesses to establish private autonomous AI ethics committees to comply with these principles.
Regulatory measures
The act mandates transparency obligations, including prior notification to users when providing products or services based on high-impact or generative AI, and labeling requirements for generative AI outputs. Operators of AI systems trained with cumulative computational power above a certain threshold must implement risk identification, assessment and mitigation measures throughout the AI life cycle, and establish a risk management system to monitor and respond to AI-related safety incidents.
High-impact AI operators must implement safety and reliability measures as prescribed by presidential decree, including risk management plans, explanation methods for AI outputs and criteria, user protection plans, human oversight of high-impact AI, documentation of safety and reliability measures, and other matters deliberated and decided by the National AI Committee.
The act also obliges operators that provide products or services using high-impact AI to evaluate potential impacts on individuals' fundamental rights in advance, and national agencies must prioritize the use of products or services that have undergone this fundamental rights impact assessment, or FRIA.
Given the uncertainty over whether a particular system qualifies as high-impact AI, operators may request confirmation from the Minister of Science and ICT. Additionally, AI operators without an address or place of business in South Korea that meet certain thresholds for user numbers or revenue, as defined by presidential decree, must designate and report a domestic representative in writing.
To ensure the effectiveness of the law, the act grants investigative authority to the Minister of Science and ICT. Violations of the obligations to notify users of AI use, to designate a domestic representative, or to comply with suspension or corrective orders issued by the minister are subject to administrative fines of up to KRW30 million.
Comparison of the AI Framework Act and EU AI Act
The AI Framework Act established its concepts and various obligations by referencing international discussions, including the EU AI Act. The definition of AI systems in South Korea's act is similar to those of the EU and the Organisation for Economic Co-operation and Development. While addressing both promotion and regulation, the AI Framework Act defines its personal scope of application more comprehensively than the EU AI Act.
The AI Framework Act primarily targets high-impact AI, generative AI and high-performance AI systems based on cumulative computational power used for learning as its main regulatory subjects. In contrast, the AI Act focuses on prohibited AI practices, high-risk AI systems and general-purpose AI models, marking a difference in approach.
A significant difference lies in how obligations are structured. The EU AI Act stipulates differentiated obligations based on the types of participants in the AI value chain, whereas South Korea's AI Framework Act comprehensively defines obligations without distinguishing between types.
Like the EU AI Act, the AI Framework Act stipulates various obligations related to high-impact AI, including FRIAs. However, the specific level of regulation may change according to future subordinate legislation.
The EU has established strong sanctions with various levels of penalties depending on the type of violation, including administrative fines of up to 7% of global annual turnover. In contrast, the AI Framework Act only stipulates administrative fines of up to KRW30 million for three types of violations, indicating a difference in the severity of sanctions.
The three violations subject to fines concern the obligation to notify users when providing products or services based on high-impact or generative AI, the obligation to designate a domestic representative, and the obligation to comply with the Minister of Science and ICT's suspension or corrective orders.
Significance and implications
While referencing both the U.S. model, which emphasizes private sector autonomy, and the EU regulatory model, which emphasizes safety and reliability, the AI Framework Act ultimately chose its own path, combining minimal regulation with AI promotion in pursuit of the goal of becoming a leading country in global AI competitiveness. Because the level of regulation could be strengthened further as subordinate legislation and guidelines are enacted, calls for keeping regulation at an appropriate level are growing louder.
Although the Ministry of Science and ICT is at the center of South Korea's AI governance, as AI spreads across all fields the coordinating function of the National AI Committee has become crucial for harmonizing the expertise of relevant ministries in areas with distinctive regulatory characteristics, such as personal information, copyright, health care and defense.
Above all, it is significant that South Korea has established a legal regulatory framework distinct from those of the U.S. and the EU, providing a new legislative example for countries choosing their own stance based on their circumstances and desired direction. At the same time, as global norms and governance discussions on AI intensify, the need for regulatory interoperability and international cooperation to ensure AI trustworthiness has grown even greater.
Kyoungjin Choi is a professor of law and director of the Center for AI Data and Policy at Gachon University and the president of the Korean Association for Artificial Intelligence and Law.