Products
NVIDIA DGX B200
SKU: 900-2G133-0010-000-1
- 8x NVIDIA Blackwell GPUs
- 1,440GB total GPU memory
- 72 petaFLOPS training and 144 petaFLOPS inference
- 2 Intel® Xeon® Platinum 8570 Processors
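Note: the DGX B200 totals above are consistent with the per-GPU figures NVIDIA quotes for the Blackwell B200 SXM module (about 180GB of HBM3e and roughly 9 petaFLOPS FP8 / 18 petaFLOPS FP4 with sparsity). Those per-GPU numbers are not part of this listing; the arithmetic below is only an illustrative cross-check under that assumption.

\[
8 \times 180\,\mathrm{GB} = 1{,}440\,\mathrm{GB}, \qquad 8 \times 9\,\mathrm{PF_{FP8}} = 72\,\mathrm{PF}, \qquad 8 \times 18\,\mathrm{PF_{FP4}} = 144\,\mathrm{PF}
\]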
NVIDIA DGX H200
SKU: 900-2G133-0010-000-1-1
- 8x NVIDIA H200 GPUs with 1,128GB of total GPU memory
- 4x NVIDIA NVSwitches™
- 10x NVIDIA ConnectX®-7 400Gb/s Network Interface
- Dual Intel Xeon Platinum 8480C processors
- 30TB NVMe SSD
NVIDIA DGX GH200
SKU: DGX GH200-1
- 32x NVIDIA Grace Hopper Superchips, interconnected with NVIDIA NVLink
- Massive, shared GPU memory space of 19.5TB
- 900 gigabytes per second (GB/s) GPU-to-GPU bandwidth
- 128 petaFLOPS of FP8 AI performance
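Note: the shared-memory and FP8 figures above line up with per-superchip numbers if each Grace Hopper Superchip is assumed to pair 480GB of LPDDR5X with a 144GB HBM3e Hopper GPU delivering roughly 4 petaFLOPS of FP8 (with sparsity). Neither per-unit figure appears in this listing, so treat the arithmetic below as an approximate cross-check only.

\[
32 \times (480 + 144)\,\mathrm{GB} \approx 19.5\,\mathrm{TB}, \qquad 32 \times 4\,\mathrm{PF_{FP8}} = 128\,\mathrm{PF}
\]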
NVIDIA DGX H100
SKU: DGX H100
- 8x NVIDIA H100 GPUs with 640GB of total GPU memory
- 4x NVIDIA NVSwitches
- 8x NVIDIA ConnectX-7 and 2x NVIDIA BlueField DPU 400Gb/s network interfaces
- Dual x86 CPUs and 2TB of system memory
- 30TB NVMe SSD
NVIDIA DGX A100
SKU: DGXA-2530A+P2CMI00
- 8x NVIDIA A100 GPUs with 320GB total GPU memory
- 6x NVIDIA NVSwitches
- 9x Mellanox ConnectX-6 200Gb/s network interfaces
- Dual 64-core AMD CPUs and 1TB system memory
- 15TB Gen4 NVMe SSD
NVIDIA H200 NVL
SKU: 900-21010-0040-000
The GPU for Generative AI and HPC. The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200’s larger and faster memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for ...
NVIDIA HGX B200
SKU: HGX B200
- HGX B200 8-GPU
- 8x NVIDIA B200 SXM
- NVIDIA NVLink (Fifth generation)
- NVIDIA NVSwitch™ (Fourth generation)
NVIDIA DGX Station A100 320GB/160GB
SKU: DGXS-2080C+P2CMI00
- 2.5 petaFLOPS of performance
- World-class AI platform, with no complicated installation or IT help needed
- Server-grade, plug-and-go, and doesn’t require data center power and cooling
- 4 fully interconnected NVIDIA A100 Tensor Core GPUs and up to 320 gigabytes (GB) of GPU memory
NVIDIA H100
SKU: 900-21010-0000-000
Take an order-of-magnitude leap in accelerated computing. The NVIDIA H100 Tensor Core GPU delivers unprecedented performance, scalability, and security for every workload. With the NVIDIA® NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads, while the dedicated Transformer Engine supports trillion-parameter language models. H100 uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading ...
NVIDIA HGX B100
SKU: HGX B100
- HGX B100 8-GPU
- 8x NVIDIA B100 SXM
- NVIDIA NVLink (Fifth generation)
- NVIDIA NVSwitch™ (Fourth generation)
NVIDIA HGX A100 (8-GPU)
SKU: N/A
- 8x NVIDIA A100 GPUs with 320GB total GPU memory
- 6x NVIDIA NVSwitches
- 320GB memory
- 4.8TB/s total aggregate bandwidth
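Note: the 4.8TB/s figure is the per-GPU third-generation NVLink bandwidth aggregated across the 8-GPU baseboard, and the 320GB total corresponds to the 40GB A100 variant. The per-GPU values (600GB/s and 40GB) are assumptions for this cross-check, not part of the listing above.

\[
8 \times 600\,\mathrm{GB/s} = 4.8\,\mathrm{TB/s}, \qquad 8 \times 40\,\mathrm{GB} = 320\,\mathrm{GB}
\]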
10 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMX-R4051
- GPU: 10x NVIDIA H100, A100, L40, A40, or RTX6000 (2-slot GPUs)
- CPU: 2x 4th Generation Intel Xeon Scalable processors
- System Memory: 8 TB (32 DIMM)
- Storage: NVMe
8 H100 GPU 2 EPYC AI SYSTEM
SKU: SMX-R4041
- Powered by 8x NVIDIA H100, A100, L40, A40, RTX6000, or RTXA5000 GPUs
- Dual AMD EPYC 9004 processors with 128 Zen 4c cores each
NVIDIA A40
SKU: 900-2G133-0000-000
- GPU Memory: 48 GB GDDR6 with error-correcting code (ECC)
- GPU Memory Bandwidth: 696 GB/s
- PCI Express Gen 4
NVIDIA HGX H200
SKU: HGX H200
- 141GB of HBM3e GPU memory
- 4.8TB/s of memory bandwidth
- 4 petaFLOPS of FP8 performance
- 2X LLM inference performance
- 110X HPC performance
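Note: unlike the DGX entries above, these HGX H200 figures appear to be quoted per GPU (141GB of HBM3e, 4.8TB/s, and about 4 petaFLOPS of FP8 with sparsity per H200). Under that reading, scaling to an 8-GPU baseboard reproduces the DGX H200 memory total listed earlier; this is an interpretation, not a published system spec.

\[
8 \times 141\,\mathrm{GB} = 1{,}128\,\mathrm{GB}
\]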
NVIDIA L40
SKU: 900-2G133-0010-000
The NVIDIA L40 brings the highest level of power and performance for visual computing workloads in the data center. Third-generation RT Cores and industry-leading 48 GB of GDDR6 memory deliver up to twice the real-time ray-tracing performance of the previous generation to accelerate high-fidelity creative workflows, including real-time, full-fidelity, interactive rendering, 3D design, video streaming, and virtual production.
NVIDIA RTX 4000 SFF Ada Generation
SKU: 900-5G133-2550-000-1
Built on the NVIDIA Ada Lovelace architecture, the RTX 4000 SFF combines 48 third-generation RT Cores, 192 fourth-generation Tensor Cores, and 6,144 CUDA® cores with 20GB of error correction code (ECC) graphics memory. The RTX 4000 SFF delivers incredible acceleration for rendering, AI, graphics, and compute workloads.
NVIDIA RTX 6000 Ada Generation
SKU: 900-5G133-2550-000
- NVIDIA Ada Lovelace Architecture
- 4x DisplayPort 1.4
- PCI Express 4.0 x16
NVIDIA RTX A2000
SKU: 900-5G192-2551-000
- 6 GB | 12 GB GDDR6 with error-correction code (ECC)
- 4x mini DisplayPort 1.4
- PCI Express Gen 4 x 16
NVIDIA RTX A4000
SKU: 900-5G190-2550-000
- GPU Memory: 16GB GDDR6 with error-correcting code (ECC)
- 4x DisplayPort 1.4
- PCI Express Gen 4 x 16
NVIDIA RTX A4500
SKU: 900-5G132-2550-000
- 20 GB GDDR6 with error-correcting code (ECC)
- 4x DisplayPort 1.4*
- PCI Express Gen 4 x 16
NVIDIA RTX A5000
SKU: 900-5G132-2500-000
- GPU Memory: 24GB GDDR6 with error-correcting code (ECC)
- 4x DisplayPort 1.4
- PCI Express Gen 4 x 16
NVIDIA RTX A5500
SKU: 900-5G132-2570-000
- 24GB GDDR6 with error correction code (ECC)
- 4x DisplayPort 1.4*
- PCIe Gen 4 x 16
Quadro Sync II
SKU: 900-52061-0000-000
- Synchronized displays: enables up to 32 synchronized displays, powering up to 32 4K displays for entertainment outlets and sporting events.
- Projector overlay support: multiple projectors can be used to review design concepts or changes at scale.
- Stereoscopic display support: build a stereoscopic 3D display wall for a research lab while driving up to 32 displays from a single system.
8 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMX-GS4845
- GPU: 8x NVIDIA A100, V100, RTXA6000, RTX8000, or A40
- NVLink: 4x NVLink
- CPU: 128 cores (2x AMD EPYC Rome)
- PCIe Gen 4.0 support
- System Memory: 4 TB (32 DIMM)
- Type A: 12 3.5" SATA/NVMe U.2 Hotswap bays
- Type B: 24 2.5" SATA/SAS NVMe U.2 Hotswap bays
10 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMXB7119FT83
- GPU: 10x NVIDIA A100, A40, A30, V100, RTXA6000, or RTXA5000
- NVLink: 4x NVLink
- CPU: 80 cores (2x Intel Xeon Scalable), Single/Dual Root
- System Memory: 8 TB (32 DIMM)
- Storage: 12x 3.5" SATA SSD/HDD or NVMe PCIe U.2
2 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMXB8252T75
- GPU: 2x NVIDIA RTXA6000, A40, RTX8000, or T4
- CPU: 128 cores (2x AMD EPYC Rome)
- PCIe Gen 4.0 support
- System Memory: 4 TB (32 DIMM)
- 26 2.5" SATA/NVMe U.2 SSD Hotswap bays, 2 NVMe M.2 SSD
4 GPU 1 EPYC DEEP LEARNING AI SERVER
SKU: SMXB8021G88
- GPU: 4x NVIDIA A100, V100, RTXA6000, A40, RTX8000, or T4
- CPU: 64 cores (1x AMD EPYC Rome)
- PCIe Gen 4.0 support
- System Memory: 2 TB (16 DIMM)
- 2 2.5" SATA SSD Hotswap bays, 2 NVMe M.2 SSD
- 1U Rackmount
4 GPU 1 XEON DEEP LEARNING AI SERVER
SKU: SMXB5631G88
- GPU: 4x NVIDIA A100, V100, RTXA6000, A40, RTX8000, or T4
- CPU: 28 cores (1x Intel Xeon Scalable)
- System Memory: 1.5 TB (12 DIMM)
- STORAGE: 2 2.5" SSD, 2 NVMe M.2 SSD
- 1U Rackmount
4 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMX-B8251
- GPU: 4x NVIDIA A100, V100, RTXA6000, A40, RTX8000, or T4
- NVLink: 2 to 6 NVLink
- CPU: 128 cores (2x AMD EPYC Rome)
- PCIe Gen 4.0 support
- System Memory: 2 TB (16 DIMM)
- 8 3.5" SATA/NVMe U.2 Hotswap bays