Meta on Wednesday revealed details about the next generation of its custom artificial intelligence accelerator chip.
Earlier this year, Meta announced its intentions to roll out a new iteration of a custom data center chip aimed at addressing the increasing computational demands required to operate AI-driven products across Facebook, Instagram, and WhatsApp.
Internally known as “Artemis,” the chip is intended to reduce Meta’s reliance on Nvidia’s AI chips and lower its overall energy consumption.
In a blog post, the company stated, “This chip’s architecture is fundamentally focused on providing the right balance of compute, memory bandwidth, and memory capacity for serving ranking and recommendation models.”
The new Meta Training and Inference Accelerator (MTIA) chip is part of Meta’s broader custom silicon effort, which also includes exploration of other hardware systems. Alongside chip and hardware development, Meta has invested heavily in software designed to harness its infrastructure as efficiently as possible.
Meta is also spending billions of dollars on Nvidia and other AI chips. Earlier this year, CEO Mark Zuckerberg said the company planned to acquire roughly 350,000 of Nvidia’s flagship H100 chips, and to amass the equivalent of 600,000 H100s this year once chips from other suppliers are counted.
Taiwan Semiconductor Manufacturing Co. will produce the new chip on its “5nm” process. Meta says it delivers three times the performance of the first-generation MTIA processor.
The chip is already deployed in Meta’s data centers, where it is serving AI applications. Meta said it has several programs underway to broaden the scope of MTIA, including support for generative AI workloads.