The conversation around AI chips is shifting from pure processing power to the critical infrastructure that enables it: cooling and high-speed data transfer. This change signals a new investment cycle where managing heat and data bottlenecks is becoming just as important as the silicon itself.
The first major challenge is heat. Next-generation GPUs are projected to consume over 1,000 watts each, pushing server rack power densities well beyond 100 kW. Traditional air cooling simply can't keep up, hitting a practical limit around 40 kW per rack. This has made direct liquid cooling not just an option, but a necessity. The industry is moving towards solutions that bring coolant as close to the chip as possible.
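The rack-density claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses the 1,000 W per-GPU figure from the text; the GPU count per rack and the non-GPU overhead factor are illustrative assumptions, not vendor specifications.

```python
# Back-of-envelope rack power math (illustrative assumptions, not vendor specs).
GPU_POWER_W = 1_000        # projected per-GPU draw cited in the text
GPUS_PER_RACK = 72         # assumption: a dense rack in the 72-GPU class
OVERHEAD = 1.4             # assumption: CPUs, NICs, fans, power conversion losses

rack_power_kw = GPU_POWER_W * GPUS_PER_RACK * OVERHEAD / 1_000
air_cooling_limit_kw = 40  # practical air-cooling ceiling per rack, per the text

print(f"Estimated rack power: {rack_power_kw:.0f} kW")
print(f"Shortfall vs. air cooling: {rack_power_kw - air_cooling_limit_kw:.0f} kW")
```

Even with conservative assumptions, the rack lands around 100 kW, more than double what air cooling can handle, which is why liquid cooling stops being optional.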
This is where a key narrative shift is happening. According to semiconductor test firm Hung Ching Precision, the debate isn't about which liquid cooling technology will win. Instead, solutions like MCCP (Micro-Channel Cold Plate) and MCL (Micro-Channel Lid) will likely coexist. MCCP, which offers a significant efficiency boost and is easier to integrate, is expected to be adopted first. MCL, which has higher performance potential but is more complex to manufacture, will follow. This suggests a more pragmatic, mix-and-match approach based on specific cost and performance needs, rather than a winner-takes-all battle.
The second challenge is the data bottleneck. As models grow, the electrical copper wiring connecting chips and servers is hitting its limits in speed and power efficiency. The industry's answer is CPO (Co-packaged Optics), which integrates optical engines directly next to the main chips, converting electrical signals to light for faster, more efficient data transmission. Initial deployments are expected to start in 2026.
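The power-efficiency argument for CPO can be made concrete in energy-per-bit terms. The figures below are illustrative assumptions in the commonly cited range for pluggable optics versus co-packaged optics, not measurements from any specific product; the 51.2 Tb/s aggregate bandwidth is likewise an assumed switch-class number.

```python
# Why CPO helps link power: energy-per-bit comparison (illustrative assumptions).
LINK_TBPS = 51.2           # assumption: aggregate switch bandwidth, Tb/s
PLUGGABLE_PJ_BIT = 15.0    # assumed energy per bit for pluggable optical modules
CPO_PJ_BIT = 5.0           # assumed energy per bit for co-packaged optics

def link_power_w(tbps: float, pj_per_bit: float) -> float:
    """Total I/O power (W) = bits/s * joules/bit."""
    return tbps * 1e12 * pj_per_bit * 1e-12

pluggable_w = link_power_w(LINK_TBPS, PLUGGABLE_PJ_BIT)
cpo_w = link_power_w(LINK_TBPS, CPO_PJ_BIT)
print(f"Pluggable optics: {pluggable_w:.0f} W vs CPO: {cpo_w:.0f} W")
```

Under these assumptions, moving the optical engine next to the chip cuts I/O power to roughly a third, which is the efficiency gain driving the 2026 deployment timeline.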
However, this move to optics creates a new, profitable challenge: testing. CPO requires sophisticated equipment that can test both electrical and optical signals simultaneously. This complexity dramatically increases the value and average selling price (ASP) of test solutions. Hung Ching's commentary highlights this as the real area for profit expansion, moving beyond the cooling hardware itself.
This entire shift is underpinned by recent events. Big Tech companies like Meta and Google have announced massive increases in their 2026 AI infrastructure spending. Nvidia's upcoming 'Vera Rubin' platform is designed for full liquid cooling from the ground up, and key players like Broadcom are already demonstrating the reliability of their CPO platforms. These developments confirm that the transition is well underway, lending strong support to the view that the second half of 2026 will be a major inflection point for investment in this new ecosystem.
Key terms:
- MCCP (Micro-Channel Cold Plate): A cooling technology where microscopic channels (80-100 μm) are carved into a metal plate placed on the chip, allowing liquid to flow through and absorb heat with high efficiency.
- MCL (Micro-Channel Lid): A more advanced cooling method where the micro-channels are etched directly into the chip's protective lid, minimizing the distance between the heat source and the coolant for maximum thermal transfer.
- CPO (Co-packaged Optics): A technology that places optical components (for sending and receiving data as light) in the same package as the main processing chip (like a GPU or switch), drastically reducing data travel distance and improving power efficiency.
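The cooling capacity of a micro-channel loop like MCCP's follows from basic calorimetry: heat absorbed equals mass flow times specific heat times the coolant's temperature rise. The flow rate and temperature rise below are illustrative assumptions for a single cold plate, not published MCCP or MCL figures.

```python
# Heat a micro-channel cold-plate loop can absorb: Q = m_dot * c_p * dT.
# All operating numbers are illustrative assumptions, not measured figures.
CP_WATER = 4186.0          # J/(kg*K), specific heat of water
FLOW_LPM = 1.5             # assumed coolant flow through one plate, litres/min
DELTA_T = 15.0             # assumed coolant temperature rise across the plate, K

mass_flow_kg_s = FLOW_LPM / 60.0 * 1.0   # ~1 kg per litre for water
heat_removed_w = mass_flow_kg_s * CP_WATER * DELTA_T
print(f"Heat absorbed per plate: {heat_removed_w:.0f} W")
```

Even a modest 1.5 L/min loop removes well over 1,000 W, comfortably covering a next-generation GPU; the micro-channels exist to get that heat into the coolant across a very small contact area.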
