Products
-
NVIDIA DGX A100
SKU: DGXA-2530A+P2CMI00
- 8x NVIDIA A100 GPUs with 320 GB total GPU memory
- 6x NVIDIA NVSwitches
- 9x Mellanox ConnectX-6 200 Gb/s network interfaces
- Dual 64-core AMD CPUs and 1 TB system memory
- 15 TB Gen4 NVMe SSD
-
NVIDIA DGX H100
SKU: DGX H100
- 8x NVIDIA H100 GPUs with 640 GB total GPU memory
- 4x NVIDIA NVSwitches
- 8x NVIDIA ConnectX-7 and 2x NVIDIA BlueField DPU 400 Gb/s network interfaces
- Dual x86 CPUs and 2 TB system memory
- 30 TB NVMe SSD
-
NVIDIA DGX STATION A100 320GB/160GB
SKU: DGXS-2080C+P2CMI00
- 2.5 petaFLOPS of performance
- World-class AI platform, with no complicated installation or IT help needed
- Server-grade, plug-and-go, and doesn’t require data center power and cooling
- 4 fully interconnected NVIDIA A100 Tensor Core GPUs and up to 320 gigabytes (GB) of GPU memory
-
NVIDIA HGX A100 (8-GPU)
SKU: N/A
- 8x NVIDIA A100 GPUs with 320 GB total GPU memory
- 6x NVIDIA NVSwitches
- 320 GB memory
- 4.8 TB/s total aggregate bandwidth (see the note below)
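Note: the aggregate bandwidth figure above is consistent with the A100's NVLink bandwidth of 600 GB/s per GPU (a figure taken from NVIDIA's A100 specifications, not stated in this listing): 8 GPUs x 600 GB/s = 4,800 GB/s = 4.8 TB/s.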
-
8 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMX-GS4845
- GPU: 8 NVIDIA A100, V100, RTXA6000, RTX8000, A40
- NVLINK: 4 NVLINK
- CPU: 128 CORES (2 AMD EPYC ROME)
- PCIe Gen 4.0 support
- System Memory: 4 TB (32 DIMM)
- Type A: 12 3.5" SATA/NVMe U.2 Hotswap bays
- Type B: 24 2.5" SATA/SAS NVMe U.2 Hotswap bays
-
NVIDIA A10
SKU: 900-2G133-0000-000-1
- GPU Memory: 24 GB GDDR6
- GPU Memory Bandwidth: 600 GB/s
- PCI Express Gen 4
-
NVIDIA A30
SKU: 900-2G133-0000-000-1-1
- GPU Memory: 24 GB HBM2
- GPU Memory Bandwidth: 933 GB/s
- PCI Express Gen 4
-
NVIDIA A40
SKU: 900-2G133-0000-000
- GPU Memory: 48 GB GDDR6 with error-correcting code (ECC)
- GPU Memory Bandwidth: 696 GB/s
- PCI Express Gen 4
-
NVIDIA RTX A4000
SKU: 900-5G190-2550-000
- GPU Memory: 16 GB GDDR6 with error-correcting code (ECC)
- 4x DisplayPort 1.4
- PCI Express Gen 4 x 16
-
NVIDIA RTX A5000
SKU: 900-5G132-2500-000
- GPU Memory: 24 GB GDDR6 with error-correcting code (ECC)
- 4x DisplayPort 1.4
- PCI Express Gen 4 x 16
-
10 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMXB7119FT83
- GPU: 10 NVIDIA A100, A40, A30, V100, RTXA6000, RTXA5000
- NVLINK: 4 NVLINK
- CPU: 80 CORES (2 Intel Xeon Scalable), Single/Dual Root
- System Memory: 8 TB (32 DIMM)
- STORAGE: 12 3.5" SATA SSD/HDD OR NVMe PCIe U.2
-
2 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMXB8252T75
- GPU: 2 NVIDIA RTXA6000, A40, RTX8000, T4
- CPU: 128 CORES (2 AMD EPYC ROME)
- PCIe Gen 4.0 support
- System Memory: 4 TB (32 DIMM)
- 26 2.5" SATA/NVMe U.2 SSD Hotswap bays, 2 NVMe M.2 SSD
-
4 GPU 1 EPYC DEEP LEARNING AI SERVER
SKU: SMXB8021G88
- GPU: 4 NVIDIA A100, V100, RTXA6000, A40, RTX8000, T4
- CPU: 64 CORES (1 AMD EPYC ROME)
- PCIe Gen 4.0 support
- System Memory: 2 TB (16 DIMM)
- 2 2.5" SATA SSD Hotswap bays, 2 NVMe M.2 SSD
- 1U Rackmount
-
4 GPU 1 XEON DEEP LEARNING AI SERVER
SKU: SMXB5631G88
- GPU: 4 NVIDIA A100, V100, RTXA6000, A40, RTX8000, T4
- CPU: 28 CORES (1 Intel Xeon Scalable)
- System Memory: 1.5 TB (12 DIMM)
- STORAGE: 2 2.5" SSD, 2 NVMe M.2 SSD
- 1U Rackmount
-
4 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMX-B8251
- GPU: 4 NVIDIA A100, V100, RTXA6000, A40, RTX8000, T4
- NVLINK: 2 to 6 NVLINK
- CPU: 128 CORES (2 AMD EPYC ROME)
- PCIe Gen 4.0 support
- System Memory: 2 TB (16 DIMM)
- 8 3.5" SATA/NVMe U.2 Hotswap bays
-
4 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMXESC4000G4
- GPU: 4 NVIDIA A100, V100, RTXA6000, A40, RTX8000, T4
- NVLINK: 2 NVLINK
- CPU: 56 CORES (2 Intel Xeon Scalable), Single/Dual Root
- System Memory: 2 TB (16 DIMM)
- STORAGE: 8 3.5" SATA SSD/HDD OR NVMe U.2
-
SMX STATION A100
SKU: SMX STATION A100
- GPUs: 4x NVIDIA A100 80 GB GPUs
- GPU Memory: 320 GB total
- NVLink: 6 NVLink bridges, up to 600 GB/s
- System Power Usage: 1.5 kW at 100–240 Vac
- CPU: Single AMD 7742, 64 cores, 2.25 GHz (base)–3.4 GHz (max boost)
- System Memory: 512 GB DDR4
- Networking: Dual-port 10Gbase-T Ethernet LAN, Dual-port 1Gbase-T Ethernet, BMC management port
- Storage: OS: 1x 1.92 TB NVME drive, Internal storage: 7.68 TB U.2 NVME drive
- Software: Ubuntu Linux OS, NGC Package
-
2 GPU 2 EPYC DEEP LEARNING AI WORKSTATION
SKU: SMX-DE2
- GPU: 2 RTX8000, RTXA6000, RTX3090
- 1 NVLINK (Optional)
- CPU: 128 CORES (2 AMD EPYC ROME)
- System Memory: 2 TB (16 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2
-
4 GPU 1 CORE X DEEP LEARNING AI WORKSTATION
SKU: SMX-SC4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (Optional)
- CPU: 18 CORES (1 Intel CORE X)
- System Memory: 256 GB (8 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2
-
4 GPU 1 EPYC DEEP LEARNING AI WORKSTATION
SKU: SMX-SE4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (Optional)
- CPU: 64 CORES (1 AMD EPYC ROME)
- System Memory: 1 TB (8 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2
-
4 GPU 1 THREADRIPPER DEEP LEARNING AI WORKSTATION
SKU: SMX-ST4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (Optional)
- CPU: 64 CORES (1 AMD THREADRIPPER)
- System Memory: 256 GB (8 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2
-
4 GPU 1 XEON DEEP LEARNING AI WORKSTATION
SKU: SMX-SX4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (Optional)
- CPU: 28 CORES (1 Intel Xeon W)
- System Memory: 1 TB (8 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2
-
4 GPU 2 XEON DEEP LEARNING AI WORKSTATION
SKU: SMX-DX4
- GPU: 4 RTX8000, RTXA6000, RTX3090
- 2 NVLINK (Optional)
- CPU: 56 CORES (2 Intel Xeon Scalable)
- System Memory: 1.5 TB (12 DIMM)
- 12 3.5" SSD/HDD, 2 NVMe M.2
-
4 NODES in 2U EPYC Server
SKU: N/A
- 2U chassis with 4 nodes, supporting 16x 2.5" HDDs, 1600W redundant (1+1) PSU
- Single AMD EPYC™ 7002 Processor family
- 8 DIMM slots, supports eight-channel DDR4 3200/2933 RDIMM (modules up to 64GB supported) and LRDIMM (modules up to 256GB supported)
- Supports 4 x 2.5" HDD/SSD per node (all SATA or 2 x NVME* + 2 x SATA)
- Supports 2x PCIe4.0 x 16, 2x M.2 slots per node
- Integrated IPMI 2.0 and KVM with Dedicated LAN
- Supports OCP 3.0 PCIe4.0 x 16 mezzanine card
-
4 Nodes in 2U Xeon Servers
SKU: N/A
- 2U chassis with 4 nodes, supporting 16x 2.5" HDDs, 1600W redundant (1+1) PSU
- Dual-socket Intel Xeon Scalable Processors (1st and 2nd Gen)
- Supports six-channel DDR4 2666/2400 RDIMM and LRDIMM, 16x DIMM slots
- Supports 4 x 2.5" HDD/SSD per node (all SATA or 2 x NVME + 2 x SATA)
- Supports 2 x PCIe3.0 x 16 per node. Supports OCP 3.0 PCIe3.0 x 16 mezzanine card
-
STOREMATRIX® S100 – 1600TB
SKU: SMX-S100
- 100 x 16TB = 1,600TB
- OpenZFS, Linux or Windows Storage Server
- Dual Xeon, RAM up to 2TB
- 10/25/100/200Gb/s LAN or InfiniBand
- 4U Rackmount
- Expandable Capacity
-
STOREMATRIX® S112 – 192TB
SKU: SMX-S112
- 12 x 16TB = 192 TB
- OpenZFS, Linux or Windows Storage Server
- 1 CPU, RAM up to 1.5TB
- 10/25/100/200Gb/s LAN or InfiniBand
- 1U Rackmount
- Expandable Capacity
-
STOREMATRIX® S12F – 12 NVMe U.2 FLASH
SKU: SMX-S12F
- 12 x 15.36TB ≈ 184 TB NVMe U.2 FLASH
- OpenZFS, Linux or Windows Storage Server
- 1 CPU, RAM up to 2TB
- 10/25/100/200Gb/s LAN or InfiniBand
- 1U Rackmount
- Expandable Capacity
-
STOREMATRIX® S16 – 256TB
SKU: SMX-S16
- 16 x 16TB = 256 TB
- OpenZFS, Linux or Windows Storage Server
- 1 or 2 CPU, RAM up to 2TB
- 10/25/100/200Gb/s LAN or InfiniBand
- 3U Rackmount
- Expandable Capacity
-
STOREMATRIX® S24 – 384TB
SKU: SMX-S24
- 24 x 16TB = 384 TB
- OpenZFS, Linux or Windows Storage Server
- 1 or 2 CPU, RAM up to 2TB
- 10/25/100/200Gb/s LAN or InfiniBand
- 4U Rackmount
- Expandable Capacity
Our Partners