Nvidia’s first CPU is here and powering next-gen cloud gaming

During Computex 2022, Nvidia announced the upcoming release of its first system reference designs powered by the Nvidia Grace CPU. Upon launch, Nvidia’s first data center CPU will help usher in the next generation of high-performance computing (HPC), enabling tasks such as complex artificial intelligence, cloud gaming, and data analysis.

[Image: Nvidia Grace Hopper processors]

The upcoming Nvidia Grace CPU Superchip and the Nvidia Grace Hopper Superchip will find their way into server models from some of the most well-known manufacturers, such as Asus, Gigabyte, and QCT. Alongside x86 and other Arm-based servers, Nvidia’s chips will bring new levels of performance to data centers. Both the CPU and the GPU were initially revealed earlier this year, but now, new details have emerged alongside an approximate release date.

Although Nvidia is mostly known for making some of the best graphics cards, the Grace CPU Superchip has the potential to tackle all kinds of HPC tasks, ranging from complex AI to cloud-based gaming. Nvidia teased that the Grace Superchip will come with two processor chips connected through Nvidia’s NVLink-C2C interconnect technology.


Joined together, the chips will offer up to 144 high-performance Arm v9 cores with Scalable Vector Extensions (SVE), as well as an impressive memory subsystem delivering 1TB/s of bandwidth. According to Nvidia, the new design will double the memory bandwidth and energy efficiency of current-generation server processors. The use cases Nvidia lists for the new CPU include data analytics, cloud gaming, digital twins, and hyperscale computing applications.
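For rough context, here’s a back-of-the-envelope sketch in Python. It treats the 1TB/s figure as the aggregate memory bandwidth of the full 144-core Superchip (an assumption; Nvidia hasn’t broken the number down further) and works out what that leaves per core:

```python
# Back-of-the-envelope math on the Grace CPU Superchip figures above.
# Assumption: 1TB/s is the aggregate memory bandwidth of the whole
# 144-core part, not a per-chip number.

TOTAL_BANDWIDTH_GBS = 1000  # 1TB/s, expressed in GB/s
CORE_COUNT = 144            # Arm v9 cores across the two joined chips

per_core_gbs = TOTAL_BANDWIDTH_GBS / CORE_COUNT
print(f"~{per_core_gbs:.1f} GB/s of memory bandwidth per core")  # ~6.9 GB/s
```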

Launching alongside the Nvidia Grace is the Nvidia Grace Hopper Superchip, and although the names are strikingly similar, the “Hopper” gives it away: this is not just a CPU. The Grace Hopper Superchip pairs an Nvidia Hopper GPU with an Nvidia Grace CPU, once again using the same NVLink-C2C technology.

[Image: Hopper H100 graphics card]

Combining the two has a massive effect on data transfer speeds, which Nvidia says are up to 15 times faster than on traditional CPUs. Both chips are impressive on their own, but the Grace CPU and Hopper GPU combination should be capable of handling just about any task, including giant-scale artificial intelligence applications.
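That “up to 15 times” figure is easiest to sanity-check with a quick, heavily hedged comparison. The sketch below assumes NVLink-C2C’s published 900GB/s figure and a PCIe 4.0 x16 link at roughly 64GB/s as the “traditional” CPU-to-GPU baseline; Nvidia hasn’t said exactly which baseline its claim uses, so treat this as illustration only:

```python
# Illustrative comparison only. Assumptions: NVLink-C2C at 900 GB/s
# (Nvidia's published figure) vs. a PCIe 4.0 x16 link at ~64 GB/s
# bidirectional, a common CPU-to-GPU baseline. Nvidia hasn't specified
# which baseline its "up to 15 times" claim uses.

NVLINK_C2C_GBS = 900
PCIE4_X16_GBS = 64

speedup = NVLINK_C2C_GBS / PCIE4_X16_GBS
print(f"~{speedup:.0f}x faster chip-to-chip transfers")  # ~14x, in the claim's ballpark
```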

The new Nvidia server design portfolio offers single-baseboard systems in configurations of up to four ways. These designs can be further customized to match specific workloads. To that end, Nvidia lists a few systems.

The Nvidia HGX Grace Hopper system for AI training, inference, and HPC comes with the Grace Hopper Superchip and Nvidia’s BlueField-3 data processing units (DPUs). There’s also a CPU-only alternative that combines the Grace CPU Superchip with BlueField-3.

Nvidia’s OVX systems are aimed at digital twin and collaboration workloads, and they come with the Grace CPU Superchip, BlueField-3, and Nvidia GPUs that are yet to be revealed. Lastly, the Nvidia CGX system is made for cloud gaming and graphics, pairing the Grace CPU Superchip with BlueField-3 and Nvidia’s A16 GPUs.

Nvidia’s new line of processors and HPC graphics cards is set to release in the first half of 2023. The company teased that dozens of new server models from its partners will be made available around that time.
