NVIDIA DGX H100
- 8x NVIDIA H100 GPUs with 640GB of total GPU memory
- 4x NVIDIA NVSwitches™
- 8x NVIDIA ConnectX®-7 and 2x NVIDIA BlueField® DPU 400Gb/s network interfaces
- Dual x86 CPUs and 2TB of system memory
- 30TB NVMe SSD
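As a quick way to confirm this configuration on a provisioned node, the short Python sketch below enumerates the visible GPUs and totals their memory. It assumes PyTorch with CUDA support is installed; the script and function names are illustrative and not part of the DGX software stack.

```python
# Sketch: enumerate visible GPUs and total their memory on a DGX H100-class node.
# Assumes PyTorch built with CUDA support; output will differ on other systems.
import torch

def summarize_gpus() -> None:
    if not torch.cuda.is_available():
        print("No CUDA devices visible.")
        return
    count = torch.cuda.device_count()
    total_bytes = 0
    for idx in range(count):
        props = torch.cuda.get_device_properties(idx)
        total_bytes += props.total_memory
        print(f"GPU {idx}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
    # On a DGX H100, expect 8 devices and roughly 640GB of combined GPU memory.
    print(f"Total: {count} GPUs, {total_bytes / 1024**3:.0f} GiB of GPU memory")

if __name__ == "__main__":
    summarize_gpus()
```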
About
Artificial intelligence has become the go-to approach for solving difficult business
challenges. Whether improving customer service, optimizing supply chains, extracting
business intelligence, or designing cutting-edge products and services across nearly
every industry, AI gives organizations the mechanism to realize innovation. And as a
pioneer in AI infrastructure, NVIDIA DGX™ systems provide the most powerful and
complete AI platform for bringing these essential ideas to fruition.
NVIDIA DGX H100 powers business innovation and optimization. The latest iteration of
NVIDIA’s legendary DGX systems and the foundation of NVIDIA DGX SuperPOD™, DGX
H100 is an AI powerhouse that features the groundbreaking NVIDIA H100 Tensor Core
GPU. The system is designed to maximize AI throughput, providing enterprises with a
highly refined, systemized, and scalable platform to help them achieve breakthroughs
in natural language processing, recommender systems, data analytics, and much
more. Available on-premises and through a wide variety of access and deployment
options, DGX H100 delivers the performance needed for enterprises to solve the
biggest challenges with AI.
Specification
Related products
NVIDIA DGX B200
SKU: 900-2G133-0010-000-1
- 8x NVIDIA Blackwell GPUs
- 1,440GB total GPU memory
- 72 petaFLOPS training and 144 petaFLOPS inference
- Dual Intel® Xeon® Platinum 8570 processors
NVIDIA DGX H200
SKU: 900-2G133-0010-000-1-1
- 8x NVIDIA H200 GPUs with 1,128GB of total GPU memory
- 4x NVIDIA NVSwitches™
- 10x NVIDIA ConnectX®-7 400Gb/s network interfaces
- Dual Intel Xeon Platinum 8480C processors
- 30TB NVMe SSD
NVIDIA DGX GH200
SKU: DGX GH200-1
- 32x NVIDIA Grace Hopper Superchips, interconnected with NVIDIA NVLink
- Massive, shared GPU memory space of 19.5TB
- 900GB/s GPU-to-GPU bandwidth
- 128 petaFLOPS of FP8 AI performance
Our Customers