LG AI Research has announced its new vision-language model, EXAONE 4.5.
At its core, EXAONE 4.5 is a multimodal AI designed to be an expert at understanding complex business documents. Think of it as an AI that can not only read text but also comprehend the layouts, charts, and diagrams in contracts, blueprints, and financial statements. LG claims it even outperforms well-known models like OpenAI's GPT-5 mini and Anthropic's Claude Sonnet 4.5 on these specific tasks. Furthermore, LG plans to release it as an open-weight model for research and academic purposes, allowing others to build upon its work.
This announcement is significant for several reasons. First, it aligns with South Korea's national strategy to develop 'Sovereign AI'. The goal is to create homegrown AI foundation models to reduce reliance on foreign technology and build a self-sufficient ecosystem. LG's consistent open-weight releases are a key part of this, fostering a community of local developers and researchers.
Second, this development is happening amid fierce global competition. The AI landscape is evolving rapidly, with giants like OpenAI, Anthropic, and Alibaba constantly releasing more powerful models. By claiming superior performance, LG is making a bold statement about its technological capabilities and its intent to compete on the world stage, especially in the niche of enterprise document understanding.
Third, and perhaps most exciting, is the connection to LG's long-term vision of 'Physical Intelligence'. The ultimate goal isn't just to create a smart chatbot. It's to build the cognitive engine for physical robots and devices. At CES 2026, LG showcased its CLOiD home robot, and EXAONE 4.5 is designed to be the brain that allows such robots to perceive, reason, and interact with the real world—for instance, by reading an instruction manual to assemble furniture or visually inspecting a product for defects.
This progress was made possible by strong foundational support. A national partnership with NVIDIA secured the necessary GPU computing power, while government funds and projects have created a fertile ground for large-scale AI research. In essence, EXAONE 4.5 stands at the crossroads of national policy, open-source collaboration, and a forward-looking vision for AI that extends beyond the screen and into our physical lives.
- Multimodal AI: An AI that can process and understand information from multiple types of data, such as text, images, and charts, at the same time.
- Sovereign AI: A national strategy to develop and control a country's own artificial intelligence foundation models and infrastructure, ensuring technological independence.
- Open-weight model: An AI model whose underlying parameters (weights) are publicly released, allowing researchers and developers to study, modify, and build upon it.
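To make that last definition concrete, here is a minimal, illustrative sketch of what "open weights" means in practice. It uses small NumPy arrays as stand-in parameters (the names, shapes, and values are invented for illustration); real open-weight releases ship billions of parameters in formats such as safetensors, but the principle is the same: the tensors themselves are published, so anyone can load, inspect, and modify them.

```python
# Illustrative sketch only: a toy "open-weight release" using NumPy arrays.
# All names and shapes here are invented; real models are vastly larger.
import io

import numpy as np

# A "model," at its simplest, is a collection of named parameter tensors.
weights = {
    "embedding": np.random.default_rng(0).normal(size=(8, 4)),
    "output_bias": np.zeros(4),
}

# Releasing open weights means publishing these tensors in a loadable format.
# (An in-memory buffer stands in for a downloadable file.)
buf = io.BytesIO()
np.savez(buf, **weights)
buf.seek(0)

# A downstream researcher can then load, inspect, and build upon them.
loaded = np.load(buf)
assert loaded["embedding"].shape == (8, 4)

# "Building upon" the weights can be as simple as a parameter update
# (real fine-tuning uses gradient-based training, but it edits the same tensors).
finetuned_bias = loaded["output_bias"] + 0.01
print(finetuned_bias.sum())
```

The key contrast is with closed models, whose parameters never leave the provider's servers: open weights move the tensors themselves into researchers' hands.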
