NVIDIA HGX B100
- HGX B100 8-GPU
- 8x NVIDIA B100 SXM
- NVIDIA NVLink (Fifth generation)
- NVIDIA NVSwitch™ (Fourth generation)
About
Purpose-Built for AI and HPC
AI, complex simulations, and massive datasets require multiple GPUs with extremely fast interconnections and a fully accelerated software stack. The NVIDIA HGX™ AI supercomputing platform brings together the full power of NVIDIA GPUs, NVLink®, NVIDIA networking, and fully optimized AI and high-performance computing (HPC) software stacks to provide the highest application performance and drive the fastest time to insights.
Specification

HGX B100
- GPUs: HGX B100 8-GPU
- Form factor: 8x NVIDIA B100 SXM
- FP4 Tensor Core: 112 PFLOPS
- FP8/FP6 Tensor Core: 56 PFLOPS
- INT8 Tensor Core: 56 POPS
- FP16/BF16 Tensor Core: 28 PFLOPS
- TF32 Tensor Core: 14 PFLOPS
- FP32: 480 TFLOPS
- FP64: 240 TFLOPS
- FP64 Tensor Core: 240 TFLOPS
- Memory: Up to 1.5TB
- NVIDIA NVLink: Fifth generation
- NVIDIA NVSwitch™: Fourth generation
- GPU-to-GPU bandwidth: 1.8TB/s
- Total aggregate bandwidth: 14.4TB/s
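The quoted figures are internally consistent: the aggregate NVLink bandwidth is the per-GPU bandwidth times the eight GPUs, and the Tensor Core throughput roughly halves at each step up in precision width (FP4 → FP8 → FP16 → TF32). A minimal sketch that checks this arithmetic, assuming those two relationships hold:

```python
# Sanity-check the HGX B100 8-GPU figures from the spec list above.
# Assumptions: aggregate bandwidth = per-GPU bandwidth x GPU count,
# and Tensor Core throughput halves as precision width doubles.

NUM_GPUS = 8
GPU_TO_GPU_TBPS = 1.8  # TB/s per GPU, fifth-generation NVLink

# 8 x 1.8 TB/s = 14.4 TB/s, matching "Total aggregate bandwidth".
aggregate_tbps = NUM_GPUS * GPU_TO_GPU_TBPS

# Halving pattern starting from the FP4 figure (PFLOPS).
fp4_pflops = 112
tensor_core_pflops = {
    "FP4": fp4_pflops,            # 112
    "FP8/FP6": fp4_pflops / 2,    # 56
    "FP16/BF16": fp4_pflops / 4,  # 28
    "TF32": fp4_pflops / 8,       # 14
}

print(aggregate_tbps)
print(tensor_core_pflops)
```

This is a consistency check on the published numbers, not a performance model; real attainable throughput depends on workload, sparsity, and clocks.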
You May Also Like
Related products
- NVIDIA TESLA P100
  SKU: N/A
  - GPU Memory: 16 GB CoWoS HBM2
  - CUDA Cores: 3584
  - Single-Precision Performance: 9.3 TeraFLOPS
  - System Interface: x16 PCIe Gen3
- NVIDIA A100 PCIE GPU
  SKU: 900-21001-0000-000
  - GPU Memory: 40 GB
  - Peak FP16 Tensor Core: 312 TF
  - System Interface: PCIe
- NVIDIA H200 NVL
  SKU: 900-21010-0040-000
  The GPU for Generative AI and HPC: The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200’s larger and faster memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for ...