Meta's vision of 'silicon sovereignty' seems to be hitting a significant roadblock.
Recent developments strongly suggest that Meta is strategically pivoting back to external suppliers amid challenges with its in-house AI chip program, known as MTIA (Meta Training and Inference Accelerator). This shift is highlighted by two major deals. First, Meta expanded its multi-year partnership with Nvidia to secure millions of Nvidia's next-generation GPUs. Second, just days later, Meta signed a multi-billion dollar deal with AMD for custom GPUs starting in the second half of 2026. These moves are a classic hedging strategy, indicating that Meta cannot rely solely on its internal timeline to keep its AI ambitions on track.
The reasons for this pivot trace back to clear technical and logistical hurdles. Meta's own engineers acknowledged "scaling challenges" last year, particularly with advanced packaging and the physical size limits of a single chip (reticle limits). Furthermore, reports indicate that the upcoming MTIA-3 chip uses a highly complex manufacturing process (TSMC's 3nm node with CoWoS-S packaging), which is notorious for delays, lower-than-expected yields, and cost overruns. These are not trivial issues when trying to build chips at the massive scale Meta requires.
This technical reality is directly reflected in Meta's financial planning. The company announced a staggering capital expenditure (capex) forecast of $115–$135 billion for 2026, a massive increase from previous years. This budget is primarily earmarked for AI infrastructure. Essentially, Meta is choosing to spend heavily on proven, market-ready GPUs from Nvidia and AMD. This ensures its AI models can continue to be trained without interruption, buying valuable time for the internal MTIA program to mature and overcome its current design and manufacturing challenges.
In conclusion, while the MTIA project is far from abandoned, its role has been clarified. For the foreseeable future, it is unlikely to replace top-tier GPUs from Nvidia for cutting-edge AI training. Instead, MTIA chips will likely be used for more specialized, high-efficiency tasks like powering ad recommendations and content ranking, while the heavy lifting of frontier AI development remains in the hands of its external partners.
- Capex: Short for Capital Expenditure, it refers to the money a company spends to buy, maintain, or upgrade physical assets like buildings, technology, or equipment.
- Inference: The process of using a trained AI model to make predictions or decisions on new, unseen data. This is different from 'training', which is the process of creating the model.
- CoWoS (Chip-on-Wafer-on-Substrate): An advanced semiconductor packaging technology that allows multiple chips to be integrated together on a single base, enabling higher performance and efficiency.
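The training vs. inference distinction in the glossary above can be made concrete with a minimal sketch. This is a hypothetical toy example in pure Python, unrelated to Meta's actual stack: a one-weight linear model is first *trained* by gradient descent, then the frozen weight is used for *inference* on new input.

```python
# Toy data following y = 2x; the model must learn the slope.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# --- Training: repeatedly adjust the weight to fit the data ---
w = 0.0          # the single model parameter
lr = 0.05        # learning rate
for _ in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad

# --- Inference: apply the now-frozen weight to unseen input ---
def infer(x):
    return w * x

print(round(infer(10.0), 2))  # → 20.0
```

Training is the expensive loop that needs large fleets of top-tier GPUs; inference is the comparatively cheap forward pass, which is why specialized accelerators like MTIA target inference-heavy workloads such as ad recommendation.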