Hewlett Packard Enterprise has announced that it has shipped its first Nvidia Blackwell family-based solution, the GB200 NVL72. This rack-scale system is designed to let service providers and enterprises quickly deploy large, complex artificial intelligence clusters, using sophisticated direct liquid cooling to optimize efficiency and performance.
The GB200 NVL72 features a low-latency, shared-memory architecture built on the latest GPU technology, designed to run extremely large AI models of over 1 trillion parameters in a single memory space. The system integrates NVIDIA CPUs, GPUs, compute trays, networking and software, and handles demanding workloads such as generative AI model training and inference alongside NVIDIA software applications.
“AI service providers and large enterprise model builders are under great pressure to deliver scalability, extreme performance and fast time to deployment,” said HPE’s Senior Vice President and General Manager of HPC & AI Infrastructure Solutions. “HPE provides our customers with best-in-class performance, low cost per token for training, and industry-leading services expertise.”