SK Group Chairman Chey Tae-won's recent remarks have fundamentally reframed South Korea's artificial intelligence strategy.
The core of his argument is a massive infrastructure gap. He calls for 10 to 30 gigawatts (GW) of AI-grade data center capacity, a target that exposes the national energy plan as critically undersized: just days before his statement, the government's 12th Basic Power Plan projected a mere 4.0 GW of demand from all data centers by 2040. This isn't just a difference in numbers; it's a disagreement over the very scale of ambition required to compete globally.
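The size of that mismatch can be sketched with a quick back-of-the-envelope calculation. The figures are the ones cited above; the variable names are purely illustrative:

```python
# Back-of-the-envelope comparison of the two capacity figures cited above.
# All values in gigawatts (GW); variable names are illustrative, not official.
chey_target_low, chey_target_high = 10.0, 30.0  # Chey's proposed AI data center range
power_plan_projection = 4.0                      # 12th Basic Power Plan, all data centers by 2040

gap_low = chey_target_low / power_plan_projection
gap_high = chey_target_high / power_plan_projection
print(f"Chey's target is {gap_low:.1f}x to {gap_high:.1f}x the official projection")
# → Chey's target is 2.5x to 7.5x the official projection
```

Even at the low end of his range, the proposed capacity is two and a half times what the national plan anticipates for the entire data center sector.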
This urgency is driven by global trends. First, the world is already in a full-sprint infrastructure race. The International Energy Agency (IEA) reported that data center electricity use surged in 2025, with hyperscalers such as Google and Amazon collectively committing over $400 billion in capital expenditure (capex) and planning to raise that by another 75% in 2026. Chey's call is a direct response to this acceleration, positioning Korea's current trajectory as falling dangerously behind.
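Taken together, those two figures imply a striking 2026 number. A minimal sketch of the arithmetic, using the values cited above:

```python
# Implied 2026 hyperscaler spending from the figures cited above (illustrative).
capex_2025_billion = 400      # reported 2025 capex floor, in $B ("over $400 billion")
planned_increase = 0.75       # planned 75% increase for 2026

capex_2026_billion = capex_2025_billion * (1 + planned_increase)
print(f"Implied 2026 capex: over ${capex_2026_billion:.0f} billion")
# → Implied 2026 capex: over $700 billion
```

In other words, if the plans hold, annual hyperscaler capex would approach three-quarters of a trillion dollars in a single year.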
Second, the physical constraints are real and structural. Chey himself warned earlier that shortages of memory and wafers could persist until 2030. This reframes the high prices and long lead times for GPUs and High Bandwidth Memory (HBM) not as a temporary market fluctuation, but as a long-term bottleneck. The challenge isn't just buying chips; it's securing a reliable supply in a world where demand far outstrips production capacity.
Finally, geopolitics adds another layer of risk. Ongoing U.S. export controls on advanced AI chips and China’s tightening regulations create uncertainty and disrupt global supply chains. This strengthens the case for building 'sovereign AI' capacity—infrastructure that is located and controlled domestically to ensure reliable access.
In conclusion, the narrative has shifted decisively. Korea’s AI challenge is no longer primarily about software talent or algorithms. It's about mobilizing immense capital—potentially over a trillion dollars—and making bold policy decisions on energy and grid infrastructure. The question is whether the country can move fast enough to build the physical foundation for its AI ambitions.
Key terms:
- Hyperscaler: A large-scale cloud computing provider that operates massive data centers, such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure.
- Capex (Capital Expenditure): Funds used by a company to acquire, upgrade, and maintain physical assets such as property, plants, buildings, technology, or equipment.
- HBM (High Bandwidth Memory): A high-performance RAM interface for 3D-stacked memory, crucial for training large AI models due to its ability to move data quickly.
