The expanded partnership covers several generations of Meta’s custom MTIA processors, begins with more than a gigawatt of computing capacity, and is described as the “first phase of a sustained, multi-gigawatt rollout.” The new chips will be the first custom AI silicon to use a 2-nanometre process.
Meta has expanded its partnership with chip designer Broadcom to build several generations of custom artificial intelligence processors, extending the deal through 2029 with an initial commitment of more than one gigawatt of computing capacity, sufficient to power roughly 750,000 US homes.
The companies also announced that Broadcom CEO Hock Tan would leave Meta’s board of directors when his term expires at the company’s next annual meeting and move into an advisory role focused specifically on Meta’s custom chip strategy.
Meta described the one-gigawatt commitment as “the first phase of a sustained, multi-gigawatt rollout.” The deal covers Meta’s Training and Inference Accelerator programme, known as MTIA, in which Broadcom provides chip design, packaging, and networking technology.
The first chip in the programme, the MTIA 300, already runs Meta’s ranking and recommendation systems across Facebook, Instagram, and other apps; three further chip generations are planned through 2027, designed primarily for inference, the process by which AI models respond to user queries in real time.
Broadcom confirmed separately that the new MTIA silicon will be the first custom AI chips in the industry to use a 2-nanometre manufacturing process. Broadcom’s Ethernet networking technology will also be used to connect Meta’s expanding clusters of AI computers at scale.
Mark Zuckerberg said Meta was partnering with Broadcom “across chip design, packaging, and networking to build out the massive computing foundation we need to deliver personal superintelligence to billions of people.”
The framing is consistent with Meta’s stated ambition, articulated by Zuckerberg in January, to spend up to $135 billion on capital expenditure in 2026 as it races to build AI infrastructure to compete with OpenAI and Google.
The Broadcom deal is the latest in a series of large-scale chip commitments Meta has announced this year, which already include six gigawatts of AMD GPUs, millions of Nvidia chips, custom processors designed with Arm Holdings, and capacity rented from neocloud providers including CoreWeave and Nebius.
Unlike Google’s TPUs or Amazon’s Trainium, which are offered to external cloud customers as a revenue stream, Meta’s MTIA chips are exclusively for internal use, powering the AI features and recommendation systems that underpin its advertising business.
The MTIA programme follows the path set by Google, which began producing its first custom accelerators in 2015, and represents Meta’s long-term bet that purpose-built silicon optimised for its specific workloads will outperform general-purpose GPUs from Nvidia in cost efficiency at the scale Meta operates.