Take an order-of-magnitude leap in accelerated computing.
The NVIDIA H100 Tensor Core GPU delivers unprecedented performance,
scalability, and security for every workload. With NVIDIA® NVLink® Switch
System, up to 256 H100 GPUs can be connected to accelerate exascale
workloads, while the dedicated Transformer Engine supports trillion-
parameter language models. H100 uses breakthrough innovations in the
NVIDIA Hopper™ architecture to deliver industry-leading conversational AI,
speeding up large language models by 30X over the previous generation.
Ready for enterprise AI?
NVIDIA H100 Tensor Core GPUs for mainstream servers come with a
five-year subscription to the NVIDIA AI Enterprise software suite,
including enterprise support, simplifying AI adoption with the highest
performance. This ensures organizations have access to the AI
frameworks and tools they need to build H100-accelerated AI workflows
such as AI chatbots, recommendation engines, vision AI, and more. Access
the NVIDIA AI Enterprise software subscription and related support
benefits for the NVIDIA H100 here.
Securely accelerate workloads from
enterprise to exascale.
NVIDIA H100 GPUs feature fourth-generation Tensor Cores and the
Transformer Engine with FP8 precision, extending NVIDIA's market-leading
position in AI with up to 9X faster training and up to 30X faster
inference on large language models. For
high-performance computing (HPC) applications, H100 triples the
floating-point operations per second (FLOPS) of FP64 and adds
dynamic programming (DPX) instructions to deliver up to 7X higher
performance. With second-generation Multi-Instance GPU (MIG),
built-in NVIDIA confidential computing, and NVIDIA NVLink Switch
System, H100 securely accelerates all workloads for every data center
from enterprise to exascale.
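The DPX instructions mentioned above target the inner recurrence of dynamic-programming algorithms such as Smith-Waterman sequence alignment or edit distance, where each cell is a min/max over neighboring cells plus a cost. As a minimal CPU-only sketch of that workload class (plain Python, no NVIDIA API involved):

```python
# Pure-Python illustration of the dynamic-programming pattern that
# Hopper's DPX instructions accelerate in hardware (e.g., Smith-Waterman,
# Levenshtein edit distance). Sketch only -- not NVIDIA code.

def levenshtein(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) edit-distance DP, row by row."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            # Each cell is a min over neighbor cells plus a cost --
            # the min-plus recurrence DPX instructions fuse in hardware.
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

On H100, recurrences of this shape run on dedicated instructions rather than general-purpose arithmetic, which is the source of the quoted up-to-7X gain for such workloads.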