Global AI Governance Law and Policy: South Korea
This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in South Korea. The full series can be accessed here.
Published: August 2025
South Korea is a global powerhouse in several industries, including IT, semiconductors and batteries, enabling the country to emerge as a key player in artificial intelligence. The adoption of South Korea’s Act on the Development of Artificial Intelligence and Establishment of Trust, known as the AI Basic Act, has been a watershed moment in the development of the nation’s artificial intelligence policy. The AI Basic Act is scheduled to come into effect on 22 January 2026 and will be the world’s second comprehensive AI law after the EU AI Act.
Many of the technical details of the obligations under the act are delegated to enforcement decrees, which are currently being prepared by the Ministry of Science and ICT. There have also been other regulatory developments relating to AI, including under the Personal Information Protection Act, the Copyright Act, and the Monopoly Regulation and Fair Trade Act.
The Lee Jae Myung presidential administration, which came into power on 4 June 2025, views AI as a key driver of South Korea’s economic growth. President Lee announced his vision to position South Korea among the world’s top three AI players by creating an AI-related industrial innovation ecosystem, building the world’s most advanced AI infrastructure, introducing legislation and governance systems, and developing AI talent. Many experts anticipate that South Korea’s AI industry will undergo significant transformations, fueled by such technological growth and recent regulatory developments.
The South Korean government has pursued policies that promote the AI industry by recognizing the autonomy of the private sector and avoiding onerous regulations. At the forefront of the government’s efforts is the AI Basic Act, which is aimed at “protect[ing] the rights and dignity of the people, improv[ing] their quality of life and strengthen[ing] national competitiveness by stipulating the basic matters necessary for the safe development of artificial intelligence and the establishment of trust.”
Regulation under the AI Basic Act
Under the AI Basic Act, AI is defined as an electronic implementation of human intellectual abilities, such as learning, reasoning, perception, decision-making and language comprehension. An AI system is defined as a system powered by AI that, with varying levels of autonomy and adaptability, makes predictions, suggestions and decisions affecting real and virtual environments for a given goal. The AI Basic Act defines AI business operators as corporations, organizations, individuals and government bodies conducting AI-related businesses. Operators fall into one of two categories: AI development business operators, who develop and offer AI, and AI utilization business operators, who provide products or services powered by AI developed by the former. While not an exact match, an AI development business operator roughly corresponds to a provider under the EU AI Act, and an AI utilization business operator to a deployer under the same framework. The AI Basic Act’s obligations currently apply equally to both types of operator, although it remains to be seen whether forthcoming enforcement decrees or associated guidelines will differentiate obligations by operator type.
The AI Basic Act adopts a risk-based and comprehensive regulatory framework. The act imposes different obligations on AI business operators depending on the type of AI being provided. For example, operators of high-impact AI — defined as systems that significantly affect or pose risks to human life, physical safety or fundamental rights and are used in critical domains such as nuclear power, energy, traffic control, recruitment and loan evaluation — are required to:
- Assess whether their AI qualifies as “high-impact AI” before deployment. Operators may optionally seek confirmation of this assessment from the Minister of the MSIT.
- Inform the users in advance that their products are powered by high-impact AI.
- Implement a comprehensive framework of safety and reliability measures for their high-impact AI.
Additionally, AI business operators are encouraged to conduct impact assessments to evaluate the potential effects of their high-impact AI on people’s fundamental rights.
Generative and high-performance AI
The AI Basic Act does not mention foundation or general-purpose models. Instead, it sets out obligations for generative AI and for high-performance AI, i.e., AI systems trained with cumulative compute exceeding a predetermined threshold.
The AI Basic Act defines generative AI as an AI system that generates text, sound, images, videos or other outputs by mimicking the structure and features of the input data. Operators are required to notify users in advance that their products are powered by generative AI. Operators must also clearly label products or services that have been created by generative AI. When virtual outputs may be mistaken as real — often referred to as deepfakes — operators are required to provide clear notifications or labels indicating the potential for misinterpretation.
Operators of high-performance AI systems are required to identify, assess and mitigate risks throughout the AI’s lifecycle. Operators must also establish a risk management system for monitoring and responding to AI-related safety incidents.
Agentic AI
There are currently no government regulations or policies specifically targeting agentic AI. However, as its use becomes more prevalent, the question of whether it qualifies as “high-impact AI” under the AI Basic Act is expected to be addressed. There have also been discussions about the need for trust-based governance and multi-layered safeguards for agentic AI.
Enforcement
The Minister of the MSIT may launch a fact-finding investigation upon receiving a report or complaint, or otherwise suspecting, that any of the following requirements under the AI Basic Act have been violated: compliance with the safety and reliability standards for high-impact AI; labeling requirements for generative AI outputs, as well as notification or labeling requirements for deepfake outputs; and implementation of safety measures and reporting of compliance results for high-performance AI.
Upon confirmation of a violation, the minister may issue an order directing the offending party to suspend or correct the non-compliant action. Failure to comply with an MSIT order can result in fines of up to KRW30 million, or approximately USD21,700.
Data protection
South Korea’s data protection authority, the Personal Information Protection Commission, is spearheading policy reform to ensure the Personal Information Protection Act remains aligned with the realities of the AI era.
The central goal is to facilitate safe and responsible use of personal information for AI development through measures such as encouraging the use of pseudonymized data for AI training and promoting frameworks to make pseudonymization more accessible. Another measure proposes amendments to the PIPA to permit the processing of lawfully collected personal information for secondary purposes — including AI development — subject to certain safeguards. Additionally, the PIPC published “Guidelines on Publicly Available Personal Information for AI Development and Services” in July 2024.
At the same time, from a privacy protection perspective, the PIPC issued guidelines in December 2024 in the form of the “AI Privacy Risk Management Model for Safe Use of AI and Data” to help data controllers identify and mitigate privacy risks associated with the development and deployment of AI technologies.
Copyright and intellectual property
South Korea’s copyright framework is still evolving in response to the unique challenges posed by AI development. In particular, there is ongoing legislative discussion regarding whether text and data mining for AI training qualifies as fair use under the Copyright Act. At present, there is no binding precedent or regulatory guidance clarifying this issue.
The Korea Copyright Commission is working on clarifying authorship and copyright recognition for AI-generated and AI-assisted works and developing standards for the use of copyrighted materials in AI training datasets. The KCC is also promoting the enactment of the Right of Publicity Act to regulate the commercial use of individuals’ names, likenesses and voices, particularly in the context of deepfakes and AI-generated content.
Antitrust policy
While enforcement activity in the AI space remains nascent, the Korea Fair Trade Commission is closely monitoring potential competition issues in AI markets. Following a market survey involving more than 50 major domestic and global AI firms, the KFTC published a policy report, “Generative AI and Competition,” in December 2024. The report identified key antitrust concerns, including barriers to entry in AI markets, the structures of vertical and horizontal competition, and concerns over data and infrastructure monopolization.
The KFTC is currently reviewing potential regulatory reforms based on the report and plans to conduct further studies on anti-competitive practices related to data collection and usage in AI systems.
Consumer protection
To protect consumers from misleading claims about AI capabilities, also known as AI washing, the KFTC is collaborating with the Korea Consumer Agency to monitor such practices. Where the KCA identifies deceptive advertising in violation of the Act on Fair Labeling and Advertising, it may issue corrective recommendations.
While the recommendations do not carry binding legal force, noncompliance may prompt the KFTC to initiate formal investigations, which can lead to administrative fines. In addition, false or exaggerated advertising may result in criminal penalties of up to two years' imprisonment or a fine of up to KRW150 million, approximately USD108,500; consumers may also pursue civil remedies under applicable law.
Health care sector
The Ministry of Food and Drug Safety has taken a proactive stance in regulating AI-powered medical devices. In January 2025, the ministry released the world’s first Guideline for Approval and Examination of Generative AI Medical Devices, setting the approval and evaluation standards for such devices. It also co-published the Guiding Principles for Conducting Clinical Trials for Machine Learning-Enabled Medical Devices in collaboration with Singapore’s Health Sciences Authority.
Meanwhile, AI-based medical devices are expected to be classified as high-impact AI once the AI Basic Act comes into force in January 2026, thereby becoming subject to stricter compliance obligations.
Financial sector
AI is increasingly being utilized in South Korea’s financial sector for credit assessment, fraud detection, customer service, and investment and risk management. Although there is currently no AI-specific legislation in place within the financial sector, the Financial Services Commission and the Financial Supervisory Service have issued a series of guidance documents — including the AI Guidelines for the Financial Sector, the Guidelines for AI Development and Utilization and AI Security Guidelines — to address the evolving financial landscape shaped by AI adoption.
These guidelines promote responsible AI implementation by emphasizing core principles such as fairness, transparency, and ethics, as well as requirements for data validation and internal controls. They also provide direction on high-risk applications, including AI-based credit scoring, which may fall under the category of high-impact AI in the AI Basic Act.
Latest developments and next steps
AI has emerged as a national priority under the current presidential administration. The government aims to build a comprehensive national support system for AI research and development, talent cultivation, and the infrastructure needed to foster AI as a strategic industry.
Recent developments in AI policy include a proposed three-year grace period for regulatory enforcement under the AI Basic Act and plans to strengthen the National Artificial Intelligence Council. The proposed Special Act on Fostering and Supporting the Artificial Intelligence Industry outlines comprehensive support measures for AI-related enterprises. Additionally, recent legislative changes have broadened permissible use of personal information, including raw data. These changes apply when pseudonymization alone is insufficient to achieve research objectives and introduce sector-specific standards for AI and data processing.
South Korea’s AI regulatory regime remains growth-oriented but is rapidly evolving. Policymakers are striving to strike a balance between fostering innovation and safeguarding public interests. Given the fast pace of legislative developments and the wide range of affected sectors, stakeholders should closely monitor South Korea’s evolving AI regulatory trajectory.
Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.
Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.
Global AI Governance Law and Policy
Jurisdiction Overviews 2025
The overview page for this series can be accessed here.
- Australia
- Canada
- China
- European Union
- India
- Japan
- Singapore
- South Korea
- United Arab Emirates
- United Kingdom
- United States
- Supplementary article: AI governance in the agentic era