
Meta tests in-house AI chip to reduce reliance on Nvidia

Meta, the parent company of Facebook, Instagram, and WhatsApp, has begun testing its first in-house chip for training artificial intelligence systems, marking a key milestone in its strategy to design custom silicon and reduce dependence on external suppliers like Nvidia.

The social media giant has initiated a small-scale deployment of the chip and plans to expand production for broader use if the tests are successful.

The move is part of Meta’s broader effort to lower its infrastructure costs as it invests heavily in AI-driven growth. The company has projected total expenses of $114 billion to $119 billion for 2025, including up to $65 billion in capital expenditures, largely allocated to AI infrastructure.

Meta’s new chip is a dedicated AI accelerator, meaning it is designed specifically for AI tasks rather than general computing. This specialized design is expected to be more power-efficient than traditional graphics processing units (GPUs), which are commonly used for AI workloads.

The chip is being produced in partnership with Taiwan Semiconductor Manufacturing Company, a leading global chip manufacturer. Meta began the testing phase after successfully completing its first “tape-out,” a crucial step in semiconductor development where an initial chip design is sent for fabrication. This process can take months and cost tens of millions of dollars, with no guarantee of success.

The new chip is part of Meta’s ongoing Meta Training and Inference Accelerator (MTIA) program, which has faced challenges in the past. The company previously scrapped a custom inference chip after it failed a similar test, opting instead to purchase billions of dollars worth of Nvidia GPUs in 2022.

Despite this setback, Meta introduced an MTIA chip last year for inference, the process of running trained AI systems in real time, as part of its content recommendation system on Facebook and Instagram. Now, Meta executives are aiming to deploy their own chips for AI training, a more computationally intensive process that involves feeding large amounts of data into AI models to improve their performance.

The plan is to first apply the new training chip to recommendation systems and eventually expand its use to generative AI applications, such as the Meta AI chatbot.

Meta remains one of Nvidia’s largest customers, relying on its GPUs to power recommendation engines, ad targeting systems, and its Llama AI model series. However, recent developments in AI research have raised questions about the long-term viability of simply scaling up models with more data and computing power.

The AI chip industry saw significant volatility earlier this year after Chinese AI startup DeepSeek launched highly efficient, low-cost AI models that prioritize inference over raw computing power. The launch triggered a brief downturn in AI stocks, with Nvidia’s shares temporarily losing up to 20% of their value before recovering.