American chipmaker Nvidia on Tuesday unveiled a new configuration for its cutting-edge AI chips that is intended to accelerate generative AI applications.
According to Ian Buck, vice president of hyperscale and HPC at Nvidia, the new Grace Hopper Superchip increases the amount of high-bandwidth memory, enabling the architecture to support larger AI models.
The configuration is designed to handle AI inference efficiently, powering generative AI applications such as ChatGPT.
With its Grace Hopper Superchip, Nvidia connects one of its H100 graphics processing units with a central processor of its own design.
Generative AI applications that produce human-like text and visuals are powered by underlying AI models that are constantly growing in size.
As model sizes grow, they need more memory to run; when a model no longer fits in a single chip's memory, it must be split across separate chips and systems, and connecting them lowers performance.
The updated GH200 configuration is expected to be released next year.