Hewlett Packard Enterprise (NYSE:HPE) has announced the shipment of its first Nvidia Grace Blackwell-based system, the Nvidia GB200 NVL72. Designed for AI service providers and large enterprises, this cutting-edge system allows for rapid deployment of complex AI clusters with advanced direct liquid cooling for unparalleled efficiency and performance.
“AI model builders require scalability, extreme performance and rapid deployment,” said Trish Damkroger, SVP and GM of HPC & AI Infrastructure Solutions. “Our industry-leading liquid cooling expertise provides low-cost AI training and best-in-class performance.”
Large-scale AI-optimized performance
The NVIDIA GB200 NVL72 is designed to handle AI models with over 1 trillion parameters using a low-latency, shared-memory architecture. The system seamlessly integrates NVIDIA CPUs, GPUs, compute and switch trays, networking, and software to accelerate workloads such as generative AI (GenAI) training and inference.
“The initial shipment of HPE’s NVIDIA GB200 NVL72 helps businesses efficiently build, deploy and scale large AI clusters,” said Bob Pette, Vice President of Enterprise Platforms at NVIDIA.
Key features of the NVIDIA GB200 NVL72 by HPE:
72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs interconnected via NVIDIA NVLink
Up to 13.5 TB of total HBM3e memory with 576 TB/s of bandwidth
Direct liquid cooling for industry-leading energy efficiency and thermal management
Best-in-class service and AI infrastructure support
HPE’s 50 years of liquid cooling innovation have enabled it to power eight of the top 15 most energy-efficient supercomputers in the Green500 rankings. The company offers end-to-end AI solutions with global maintenance, including:
On-site engineering support: resident AI and HPC experts ensure system optimization
Performance benchmarking: fine-tuned configurations for peak AI and HPC efficiency
Sustainability services: energy and emissions reporting, resource monitoring, and green computing initiatives
With this launch, HPE strengthens its leadership in AI and supercomputing, addressing GenAI, scientific discovery and computationally intensive workloads.