The Nvidia Grace CPU might cause Intel a slew of new problems

Nvidia's upcoming Grace ARM CPU is expected to make serious waves in the world of high-performance computing, and new benchmarks have emerged that may give Intel a few turbulent nights.

Nvidia made some big promises about the Grace CPU Superchip's capabilities when it was revealed at GTC 2022, claiming it could deliver 1.5X more performance than two 64-core AMD EPYC processors while consuming 50% less power.

The new 144-core Arm Neoverse discrete data center CPU is essentially two CPU chips coupled by NVLink, the company's high-speed, low-latency chip-to-chip (C2C) connection.

The benchmarks Nvidia supplied to back up its claims weren't especially informative, since they used a previous-gen model, but Tom's Guide was able to dig up a more relevant benchmark from the GTC presentation comparing the Nvidia Grace CPU with Intel's Ice Lake.

Ian Buck, vice president of Nvidia's Accelerated Computing business unit, claims that Grace is twice as fast and 2.3 times more energy-efficient than Intel's current-gen Ice Lake processors.

It's worth mentioning that this result comes from a Weather Research and Forecasting (WRF) model often used in HPC. As with other vendor-provided benchmarks, these findings should be treated with caution, since they may be cherry-picked to show the best possible result.

Despite these results, Tom's Guide cautions in its analysis that Grace won't have as big an edge against Intel's upcoming Sapphire Rapids and AMD's Genoa, both of which support DDR5 and other memory features that could erode Grace's advantages.

Nonetheless, Nvidia promises that by the time it debuts in early 2023, the CPU Superchip will be the fastest processor on the market, though this is unlikely to catch the attention of average users.

Grace will instead be better suited to heavy workloads like data analytics and hyperscale computing, but Nvidia's entry into the ARM arena adds some much-needed market diversification that could pay off beyond servers and data centers.

Could Nvidia be working on consumer processors?

With the release of Intel's Arc Alchemist discrete graphics cards, you might assume that Nvidia's move into the CPU industry is a kind of retaliation. In reality, Team Green declared in 2021 that it would be developing server microprocessors to compete with Intel, and despite its failed acquisition of ARM, it still has major ambitions for its ARM architectural license, which is valid for 20 years.

Given Intel's continued dominance in the server and data center businesses, its stock dropped 2% following the news, but that's not all that should worry Team Blue. During a conference call with investors, Nvidia CEO Jensen Huang stated that the instruction set will be used to create CPUs for a "broad spectrum of applications," ranging from SoCs for robots to high-end processors for supercomputers.

Nvidia claims to be able to supply CPUs based on its own unique Arm general-purpose cores in all four of its business segments: AI/HPC, data center, automotive, and, of course, gaming and professional graphics.

"We'll be releasing a slew of intriguing CPUs in the coming months, and Grace is just the beginning. Beyond that, you're going to see a lot of them," Huang stated. "We enjoy seeing CPU footprints expand, and we're ecstatic to see Arm expanding into robotics, autonomous cars, cloud computing, and supercomputing. Nvidia Arm CPUs will have access to the whole Nvidia accelerated computing platform."

Gaming on ARM still feels like a long way off, especially when compared to current-gen Intel and AMD consumer x86 CPU performance, but this latest set of results looks promising. If progress continues at its present pace, AAA games on ARM architecture may become a reality sooner than we originally expected.
