South Korea Sets AI Safety Rules
South Korea has enacted an AI safety law, the second such legislation in the world after the European Union's.

South Korea has enacted a national AI safety law, becoming the second jurisdiction in the world to pass such legislation, after the European Union. The act establishes a national policy framework focused on risk assessment, transparency, and human oversight.
The Ministry of Science and ICT said the act is designed to encourage growth in the AI sector by establishing national standards for trustworthy AI, balancing innovation with safety, particularly for high-impact systems. The law covers three areas: high-impact AI, safety obligations for high-performance AI, and transparency requirements for generative AI. It will be phased in over at least one year, with a focus on consultation and education, and the government will not conduct fact-finding investigations or impose administrative sanctions during that period, the newspaper reported.
In late 2024, the European Union's AI Act officially went into force, requiring companies to meet transparency requirements, publish a detailed report on the content used in AI training, and conduct safety tests before launching AI products. At the time, Ericsson CEO Börje Ekholm joined other technology leaders in co-signing an open letter criticising the EU's AI and data privacy rules, warning lawmakers that a fragmented approach would further stunt the bloc's economic and technological advancement.