Deep Learning HPC System

NVIDIA DGX A100
SKU: DGXA-2530A+P2CMI00
- 8X NVIDIA A100 GPUS WITH 320 GB TOTAL GPU MEMORY
- 6X NVIDIA NVSWITCHES
- 9X MELLANOX CONNECTX-6 200 Gb/s NETWORK INTERFACES
- DUAL 64-CORE AMD CPUs AND 1 TB SYSTEM MEMORY
- 15 TB GEN4 NVME SSD

NVIDIA DGX STATION A100 320GB/160GB
SKU: DGXS-2080C+P2CMI00
- 2.5 petaFLOPS of performance
- World-class AI platform, with no complicated installation or IT help needed
- Server-grade, plug-and-go, and doesn’t require data center power and cooling
- 4 fully interconnected NVIDIA A100 Tensor Core GPUs and up to 320 gigabytes (GB) of GPU memory
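The 2.5 petaFLOPS figure above is consistent with four A100 GPUs at their published peak Tensor Core throughput (624 TFLOPS FP16 with structured sparsity per GPU). A quick sanity check:

```python
# Sanity check: DGX Station A100 peak throughput from per-GPU specs.
# Assumes NVIDIA's published A100 peak of 624 TFLOPS
# (FP16 Tensor Core with structured sparsity) per GPU.
A100_PEAK_TFLOPS = 624  # per GPU
NUM_GPUS = 4

total_pflops = NUM_GPUS * A100_PEAK_TFLOPS / 1000  # TFLOPS -> PFLOPS
print(f"{total_pflops:.1f} petaFLOPS")  # -> 2.5 petaFLOPS
```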

NVIDIA HGX A100 (8-GPU)
SKU: N/A
- 8X NVIDIA A100 GPUS WITH 320 GB TOTAL GPU MEMORY
- 6X NVIDIA NVSWITCHES
- 4.8 TB/s TOTAL AGGREGATE BANDWIDTH
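The 4.8 TB/s aggregate figure follows from each A100's published NVLink bandwidth (600 GB/s total bidirectional per GPU) summed across the eight GPUs. A minimal check:

```python
# Sanity check: HGX A100 8-GPU aggregate NVLink bandwidth.
# Assumes NVIDIA's published A100 NVLink bandwidth of 600 GB/s per GPU.
NVLINK_GBPS_PER_GPU = 600  # GB/s, total bidirectional per A100
NUM_GPUS = 8

aggregate_tbps = NUM_GPUS * NVLINK_GBPS_PER_GPU / 1000  # GB/s -> TB/s
print(f"{aggregate_tbps} TB/s")  # -> 4.8 TB/s
```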

8 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMX-GS4845
- GPU: 8 NVIDIA A100, V100, RTXA6000, RTX8000, A40
- NVLINK: 4 NVLINK
- CPU: 128 CORES (2 AMD EPYC ROME)
- PCIe Gen 4.0 support
- System Memory: 4 TB (32 DIMM)
- Type A: 12 3.5" SATA/NVMe U.2 Hotswap bays
- Type B: 24 2.5" SATA/SAS NVMe U.2 Hotswap bays

10 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMXB7119FT83
- GPU: 10 NVIDIA A100, V100, RTX8000, RTXA6000, A40, T4
- NVLINK: 4 NVLINK
- CPU: 56 CORES (2 Intel Xeon Scalable), Single/Dual Root
- System Memory: 3 TB (24 DIMM)
- STORAGE: 12 3.5" SATA SSD/HDD OR NVMe U.2

10 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMXB7119F77
- GPU: 10 NVIDIA V100, RTX8000, RTXA6000, A40, T4
- NVLINK: 4 NVLINK
- CPU: 56 CORES (2 Intel Xeon Scalable)
- System Memory: 3 TB (24 DIMM)
- STORAGE: 14 2.5" SATA SSD OR NVMe U.2

2 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMXB8252T75
- GPU: 2 NVIDIA RTXA6000, A40, RTX8000, T4
- CPU: 128 CORES (2 AMD EPYC ROME)
- PCIe Gen 4.0 support
- System Memory: 4 TB (32 DIMM)
- 26 2.5" SATA/NVMe U.2 SSD Hotswap bays, 2 NVMe M.2 SSD

4 GPU 1 EPYC DEEP LEARNING AI SERVER
SKU: SMXB8021G88
- GPU: 4 NVIDIA A100, V100, RTXA6000, A40, RTX8000, T4
- CPU: 64 CORES (1 AMD EPYC ROME)
- PCIe Gen 4.0 support
- System Memory: 2 TB (16 DIMM)
- 2 2.5" SATA SSD Hotswap bays, 2 NVMe M.2 SSD
- 1U Rackmount

4 GPU 1 XEON DEEP LEARNING AI SERVER
SKU: SMXB5631G88
- GPU: 4 NVIDIA A100, V100, RTXA6000, A40, RTX8000, T4
- CPU: 28 CORES (1 Intel Xeon Scalable)
- System Memory: 1.5 TB (12 DIMM)
- STORAGE: 2 2.5" SSD, 2 NVMe M.2 SSD
- 1U Rackmount

4 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMX-B8251
- GPU: 4 NVIDIA A100, V100, RTXA6000, A40, RTX8000, T4
- NVLINK: 2 to 6 NVLINK
- CPU: 128 CORES (2 AMD EPYC ROME)
- PCIe Gen 4.0 support
- System Memory: 2 TB (16 DIMM)
- 8 3.5" SATA/NVMe U.2 Hotswap bays

4 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMXESC4000G4
- GPU: 4 NVIDIA A100, V100, RTXA6000, A40, RTX8000, T4
- NVLINK: 2 NVLINK
- CPU: 56 CORES (2 Intel Xeon Scalable), Single/Dual Root
- System Memory: 2 TB (16 DIMM)
- STORAGE: 8 3.5" SATA SSD/HDD OR NVMe U.2

5 GPU 2 XEON DEEP LEARNING AI WORKSTATION
SKU: SMXB7105V4
- GPU: 5 NVIDIA A100, A40, RTX8000, RTXA6000
- NVLINK: 2 NVLINK (Optional)
- CPU: 56 CORES (2 Intel Xeon Scalable)
- System Memory: 1.5 TB (12 DIMM)
- 4 3.5" SATA SSD/HDD, 2 NVMe M.2
- REDUNDANT POWER SUPPLY

2 GPU 2 EPYC DEEP LEARNING AI WORKSTATION
SKU: SMX-DE2
- GPU: 2 RTX8000, RTXA6000, RTX3090
- 1 NVLINK (Optional)
- CPU: 128 CORES (2 AMD EPYC ROME)
- System Memory: 2 TB (16 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2

4 GPU 1 CORE X DEEP LEARNING AI WORKSTATION
SKU: SMX-SC4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (Optional)
- CPU: 18 CORES (1 Intel CORE X)
- System Memory: 256GB (8 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2

4 GPU 1 EPYC DEEP LEARNING AI WORKSTATION
SKU: SMX-SE4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (OPTIONAL)
- CPU: 64 CORES (1 AMD EPYC ROME)
- System Memory: 1 TB (8 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2

4 GPU 1 THREADRIPPER DEEP LEARNING AI WORKSTATION
SKU: SMX-ST4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (Optional)
- CPU: 64 CORES (1 AMD THREADRIPPER)
- System Memory: 256 GB (8 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2

4 GPU 1 XEON DEEP LEARNING AI WORKSTATION
SKU: SMX-SX4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (Optional)
- CPU: 28 CORES (1 Intel Xeon W)
- System Memory: 1 TB (8 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2

4 GPU 2 XEON DEEP LEARNING AI WORKSTATION
SKU: SMX-DX4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (Optional)
- CPU: 56 CORES (2 Intel Xeon Scalable)
- System Memory: 1.5TB (12 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2

4 NODES in 2U EPYC Server
SKU: N/A
- 2U chassis with 4 nodes; supports 16x 2.5" HDDs, 1600W redundant (1+1) PSU
- Single AMD EPYC™ 7002 Processor family
- 8 DIMM Slots, Supports Eight-Channel DDR4 3200/2933 R DIMM (Modules up to 64GB Supported), and LR DIMM (Modules up to 256GB Supported)
- Supports 4 x 2.5" HDD/SSD per node (all SATA, or 2 x NVMe + 2 x SATA)
- Supports 2x PCIe4.0 x 16, 2x M.2 slots per node
- Integrated IPMI 2.0 and KVM with Dedicated LAN
- Supports OCP 3.0 PCIe4.0 x 16 mezzanine card
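The DIMM limits above imply the per-node memory ceiling for this chassis: with 8 slots, DDR4 tops out at 8 x 64 GB = 512 GB using RDIMMs, or 8 x 256 GB = 2 TB using LRDIMMs. A small sketch of that arithmetic:

```python
# Per-node memory ceiling for the 4-node EPYC chassis (8 DIMM slots),
# using the module limits quoted in the spec above.
DIMM_SLOTS = 8
RDIMM_MAX_GB = 64    # largest supported RDIMM module
LRDIMM_MAX_GB = 256  # largest supported LRDIMM module

print(f"RDIMM ceiling:  {DIMM_SLOTS * RDIMM_MAX_GB} GB")    # -> 512 GB
print(f"LRDIMM ceiling: {DIMM_SLOTS * LRDIMM_MAX_GB} GB")   # -> 2048 GB (2 TB)
```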

4 Nodes in 2U Xeon Server
SKU: N/A
- 2U chassis with 4 nodes; supports 16x 2.5" HDDs, 1600W redundant (1+1) PSU
- Dual-socket 1st and 2nd Gen Intel Xeon Scalable Processors
- Supports six-channel DDR4 2666/2400 RDIMM and LRDIMM, 16 x DIMM slots
- Supports 4 x 2.5" HDD/SSD per node (all SATA or 2 x NVME + 2 x SATA)
- Supports 2 x PCIe3.0 x 16 per node. Supports OCP 3.0 PCIe3.0 x 16 mezzanine card

NVIDIA DGX-2
SKU: N/A
- WORLD'S FIRST 2 PETAFLOPS SYSTEM
- ENTERPRISE GRADE AI INFRASTRUCTURE
- 12 TOTAL NVSWITCHES + 8 EDR INFINIBAND/100 GbE ETHERNET
- 2 INTEL PLATINUM CPUS + 1.5 TB SYSTEM MEMORY + DUAL 10/25 GbE ETHERNET
- 30 TB NVME SSDS INTERNAL STORAGE

NVIDIA DGX-1
SKU: N/A
- EFFORTLESS PRODUCTIVITY
- NVIDIA TESLA V100 + NEXT GENERATION NVIDIA NVLINK
- TWO INTEL XEON CPUs + QUAD EDR IB
- THREE-RACK-UNIT ENCLOSURE

NVIDIA DGX Station
SKU: DGXS-2511C+P2CMI00
- Four NVIDIA TESLA V100 GPUs
- Next Generation NVIDIA NVLINK
- Water Cooling
- 1/20 the Power Consumption
- Pre-installed standard Ubuntu 14.04 w/ Caffe, Torch, Theano, BIDMach, cuDNN v2, and CUDA 8.0