2 GPU 2 EPYC DEEP LEARNING AI SERVER
- GPU: 2× NVIDIA RTX A6000, A40, RTX 8000, or T4
- CPU: 128 cores (2× AMD EPYC Rome)
- PCIe Gen 4.0 support
- System Memory: 4 TB (32 DIMMs)
- Storage: 26× 2.5″ SATA/NVMe U.2 SSD hot-swap bays, 2× NVMe M.2 SSD
About
Deep learning is one of the fastest-growing segments of the machine learning/artificial intelligence field. It uses algorithms to model high-level abstractions of data in order to gain meaningful insight for practical application. Such data processing has applications in many fields, including computer vision, speech recognition, natural language processing, and audio recognition.
Specification
GPU
2× NVIDIA RTX A6000, A40, RTX 8000, or T4
CPU
Dual 64-core AMD EPYC Rome
System Memory
4 TB (32 × 128 GB DDR4-2933 ECC LRDIMM)
Storage
26× hot-swap 2.5″ SSD drive bays
2× NVMe M.2 SSD
Optional: 8× NVMe SFF-8654 U.2 ports
Network
2× 10GbE ports + 1× IPMI
Optional: dual-port InfiniBand or other high-speed PCIe card
System Weight
~20 kg
System Dimension
2U Rackmount
Dimensions (D × W × H): 751 mm × 19″ × 2U
Maximum Power Requirements
1,600 W (200–240 VAC input)
Redundant (1+1), PFC, 80 PLUS Platinum
Operating Temperature Range
10°C – 35°C (50°F – 95°F)
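The advertised 4 TB system-memory capacity follows directly from the DIMM configuration above; a minimal sketch of the arithmetic, assuming the vendor's 1 TB = 1024 GB convention:

```python
# Verify the advertised system-memory capacity from the DIMM configuration.
DIMM_SLOTS = 32      # spec: 32 DIMM slots
DIMM_SIZE_GB = 128   # spec: 128 GB DDR4-2933 ECC LRDIMM per slot

total_gb = DIMM_SLOTS * DIMM_SIZE_GB
total_tb = total_gb // 1024  # assuming 1 TB = 1024 GB

print(f"{total_gb} GB = {total_tb} TB")  # 4096 GB = 4 TB
```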
Support & Warranty
Three years on-site parts and service, next-business-day (NBD), 8×5
Related products
RTX PRO SERVER
SKU: SMX-R4051-1
- GPU: 8× NVIDIA RTX PRO 6000 Blackwell Server Edition
- GPU Memory: 8 × 96 GB GDDR7 with ECC = 768 GB
4 GPU 1 XEON DEEP LEARNING AI SERVER
SKU: SMXB5631G88
- GPU: 4× NVIDIA A100, V100, RTX A6000, A40, RTX 8000, or T4
- CPU: 28 cores (1× Intel Xeon Scalable)
- System Memory: 1.5 TB (12 DIMMs)
- Storage: 2× 2.5″ SSD, 2× NVMe M.2 SSD
- 1U Rackmount
2 H200 DEEP LEARNING AI SERVER
SKU: SMX-R4051-2
- GPU: 2× NVIDIA H200 NVL 141 GB 600 W or 2× NVIDIA RTX PRO 6000 96 GB Server Edition 600 W
- CPU: 1× AMD EPYC 9965 (192 cores / 384 threads, 500 W)
- System Memory: 3 TB (12 DIMMs)
- Storage: NVMe Gen 5