OpenAI has started renting artificial intelligence chips from Google to support ChatGPT and other services, according to a source familiar with the matter.
While OpenAI remains one of the biggest buyers of Nvidia GPUs—used both for training AI models and for inference, in which a trained model applies what it has learned to make predictions—it is now also tapping Google’s AI hardware to expand its computing capacity, according to Reuters.
OpenAI is turning to Google Cloud to help meet its rising demand for computing power, Reuters exclusively reported earlier this month—an unexpected move that brings together two fierce competitors in the AI industry.
For Google, the partnership reflects its ongoing effort to make its custom tensor processing units available to more outside clients.
Once reserved mainly for internal projects, these chips have now attracted major partners like Apple, as well as AI startups Anthropic and Safe Superintelligence—both founded by former OpenAI executives and now competing with ChatGPT.
OpenAI’s decision to rent Google’s TPUs marks its first significant use of non-Nvidia chips and signals a shift away from exclusive reliance on the data centers of Microsoft, a key backer.
According to The Information, which first reported the development, the shift could position TPUs as a more cost-effective alternative to Nvidia’s GPUs.
The report also noted that OpenAI aims to use the TPUs, rented through Google Cloud, to lower the cost of inference.
However, Google is not providing OpenAI with access to its most advanced TPUs, according to The Information, which cited a Google Cloud employee.
Google declined to comment on the matter, while OpenAI did not immediately respond to a request for comment.