Nvidia has unveiled an Arm-based data center CPU for AI and high-performance computing that it says will deliver 10 times the AI performance of one of AMD’s fastest EPYC CPUs, a move that gives the chipmaker control over the compute, dispatch, and networking components in servers.
- The new data center CPU, named Grace after computing pioneer Grace Hopper, will create new competition for x86 CPU rivals Intel and AMD when it lands in early 2023, the launch window Nvidia CEO Jensen Huang provided during the company’s virtual GTC 2021 conference. The move came as Nvidia sought to close its controversial $40 billion acquisition of Arm, whose CPU designs are used in Grace.
- The Santa Clara, Calif.-based company said it has already landed two major customers for the Grace CPU: the Swiss National Supercomputing Centre and the U.S. Department of Energy’s Los Alamos National Laboratory, both of which plan to bring online Grace-powered supercomputers built by Hewlett Packard Enterprise in 2023.
- The new Nvidia CPU will enable a 2,400 SPECint CPU speed rate for its next-generation, eight-GPU DGX deep learning server platform, Huang said, compared with a SPECint rate of 450 for the current DGX. Huang called that an “amazing increase in system memory bandwidth.” “Grace is Arm-based, and purpose-built for accelerated computing applications of large amounts of data such as AI,” Huang said during his GTC keynote.
- Nvidia has increasingly viewed itself as a “data center-scale computing” company, a perspective that has led it to optimize applications at the system level. While the company has seen rapid adoption of its GPUs for accelerated computing over the past few years, Nvidia extended into high-speed networking last year with its $7 billion purchase of Mellanox Technologies to address transmission bottlenecks between systems.