Products
Intel® Xeon® Platinum 8253
SKU: N/A
- # of Cores: 16
- # of Threads: 32
- Processor Base Frequency: 2.20 GHz
- Cache: 22 MB
Intel® Xeon® Platinum 8256
SKU: N/A
- # of Cores: 4
- # of Threads: 8
- Processor Base Frequency: 3.80 GHz
- Cache: 16.5 MB
Intel® Xeon® Platinum 8260
SKU: N/A
- # of Cores: 24
- # of Threads: 48
- Processor Base Frequency: 2.40 GHz
- Cache: 35.75 MB
Intel® Xeon® Platinum 8268
SKU: N/A
- # of Cores: 24
- # of Threads: 48
- Processor Base Frequency: 2.90 GHz
- Cache: 35.75 MB
Intel® Xeon® Platinum 8270
SKU: N/A
- # of Cores: 26
- # of Threads: 52
- Processor Base Frequency: 2.70 GHz
- Cache: 35.75 MB
Intel® Xeon® Platinum 8276
SKU: N/A
- # of Cores: 28
- # of Threads: 56
- Processor Base Frequency: 2.20 GHz
- Cache: 38.5 MB
Intel® Xeon® Platinum 8280
SKU: N/A
- # of Cores: 28
- # of Threads: 56
- Processor Base Frequency: 2.70 GHz
- Cache: 38.5 MB
NVIDIA DGX-2
SKU: N/A
- WORLD’S FIRST 2 PETAFLOPS SYSTEM
- ENTERPRISE GRADE AI INFRASTRUCTURE
- 12 TOTAL NVSWITCHES + 8 EDR INFINIBAND/100 GbE ETHERNET
- 2 INTEL PLATINUM CPUS + 1.5 TB SYSTEM MEMORY + DUAL 10/25 GbE ETHERNET
- 30 TB NVME SSDS INTERNAL STORAGE
NVIDIA DGX-1
SKU: N/A
- EFFORTLESS PRODUCTIVITY
- NVIDIA TESLA V100 + NEXT GENERATION NVIDIA NVLINK
- TWO INTEL XEON CPUs + QUAD EDR IB
- THREE-RACK-UNIT ENCLOSURE
Alveo U200 Data Center Accelerator Card
SKU: N/A
- Up to 90X higher performance than CPUs on key workloads at 1/3 the cost
- Over 3X higher inference throughput and 3X latency advantage over GPU-based solutions
Alveo U50 Data Center Accelerator Card
SKU: N/A
- Faster application performance with 8GB HBM memory and PCIe Gen4 interconnect
- Low-latency network capability through 100G networking, with support for 4x 10GbE, 4x 25GbE, 1x 40GbE, or 1x 100GbE
Caffe
SKU: N/A
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license. Its expressive architecture encourages application and innovation: models and optimization are defined by configuration without …
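Caffe's configuration-driven design means a network is declared in a plain-text protobuf (prototxt) file rather than in code. An illustrative layer definition is sketched below; the blob names and parameter values are made up, not taken from any real model:

```protobuf
# Illustrative Caffe layer definition (prototxt); names and sizes are hypothetical.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"     # input blob
  top: "conv1"       # output blob
  convolution_param {
    num_output: 20   # number of filters
    kernel_size: 5
    stride: 1
  }
}
```

A full model is just a sequence of such layer blocks, so architectures can be edited and swapped without recompiling anything.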
CUDA
SKU: N/A
CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on its own GPUs (graphics processing units). CUDA enables developers to speed up compute-intensive applications by harnessing the power of GPUs for the parallelizable part of the computation.
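In the CUDA model, a kernel function runs once per thread, and each thread combines its block and thread indices into a global index to pick the element it works on. A stdlib-only Python sketch of that indexing scheme (a sequential analogue of a SAXPY kernel, not real CUDA code):

```python
# Pure-Python analogue of CUDA's execution model: a "kernel" runs once per
# thread, and each thread derives a global index from (blockIdx, threadIdx).
def saxpy_kernel(block_idx, thread_idx, block_dim, n, a, x, y, out):
    i = block_idx * block_dim + thread_idx  # global element index
    if i < n:                               # guard: some threads fall past the end
        out[i] = a * x[i] + y[i]

def launch(kernel, grid_dim, block_dim, *args):
    # Sequentially simulate every (block, thread) pair a GPU would run in parallel.
    for b in range(grid_dim):
        for t in range(block_dim):
            kernel(b, t, block_dim, *args)

n = 5
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0] * n
out = [0.0] * n
launch(saxpy_kernel, 2, 4, n, 2.0, x, y, out)  # 2 blocks of 4 threads cover 5 elements
```

On a real GPU the two loops in `launch` disappear: every (block, thread) pair executes concurrently, which is where the speedup comes from.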
cuDNN
SKU: N/A
The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations of standard routines such as forward and backward convolution, pooling, normalization, and activation layers. Deep learning researchers and framework developers worldwide rely on cuDNN for high-performance GPU acceleration. It allows them …
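The forward-convolution primitive that cuDNN tunes is, at its core, a sliding dot product over the input. A minimal pure-Python reference (single channel, no padding, stride 1 — a conceptual sketch, nothing like cuDNN's actual API):

```python
def conv2d_forward(image, kernel):
    """Valid-mode 2D cross-correlation (the 'convolution' used in deep nets)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1   # output shrinks by kernel size - 1
    out = [[0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            # Dot product of the kernel with the window anchored at (r, c).
            out[r][c] = sum(image[r + i][c + j] * kernel[i][j]
                            for i in range(kh) for j in range(kw))
    return out

feature_map = conv2d_forward(
    [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]],
    [[1, 0],
     [0, 1]],
)
```

Libraries like cuDNN compute exactly this (batched, multi-channel, strided) but select among hand-tuned GPU algorithms per shape, which is why frameworks delegate to it rather than writing their own loops.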
DOCKER
SKU: N/A
Docker is the de facto developer standard for building and sharing apps, enabling simplicity, agility, and choice in software development across any infrastructure so that you can get your job done and deploy your applications faster. Docker provides a developer-friendly, CLI-based workflow and makes it easy to build, share, and run containerized applications. Even your most complex applications can be containerized: you can build locally, deploy to the cloud, and run anywhere.
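The build-locally, run-anywhere workflow starts from a Dockerfile. A minimal, hypothetical example for a small Python app (the file names are placeholders):

```dockerfile
# Hypothetical Dockerfile: package a small Python app into a portable image.
# Start from an official slim base image.
FROM python:3.11-slim
# All following commands run in this directory inside the image.
WORKDIR /app
# Copy the dependency list first so the install layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application source.
COPY . .
# Default command when a container starts from this image.
CMD ["python", "main.py"]
```

Built with `docker build -t myapp .` and run with `docker run myapp`, the same image then behaves identically on a laptop, a server, or a cloud host.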
NVIDIA DGX Station
SKU: DGXS-2511C+P2CMI00
- Four NVIDIA TESLA V100 GPUs
- Next Generation NVIDIA NVLINK
- Water Cooling
- 1/20 Power Consumption
- Pre-installed standard Ubuntu 14.04 w/ Caffe, Torch, Theano, BIDMach, cuDNN v2, and CUDA 8.0
PyTorch
SKU: N/A
Production ready: transition seamlessly between eager and graph modes with TorchScript, and accelerate the path to production with TorchServe. Distributed training: scalable distributed training and performance optimization in research and production is enabled by the torch.distributed backend. Robust ecosystem: a rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, NLP, …
RAPIDS
SKU: N/A
Accelerated data science: the RAPIDS suite of open source software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. Scale out on GPUs: seamlessly scale from GPU workstations to multi-GPU servers and multi-node clusters with Dask. Python integration: accelerate your Python data science toolchain with minimal code changes …
TENSORFLOW
SKU: N/A
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and developers easily build and deploy ML-powered applications. Easy model building: TensorFlow offers multiple levels of abstraction so you can choose the right one …
THEANO
SKU: N/A
Theano is a Python library that lets you define, optimize, and evaluate mathematical expressions, especially ones with multi-dimensional arrays (numpy.ndarray). Using Theano it is possible to attain speeds rivaling hand-crafted C implementations for problems involving large amounts of data. It can also surpass C on a CPU by many orders of magnitude by taking …
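Theano's define-then-evaluate style builds a symbolic expression graph first and only later evaluates it against concrete inputs. A toy stdlib-only analogue of that split (nothing like Theano's real API, just the idea):

```python
# Toy symbolic-expression graph in the spirit of Theano's define/evaluate split.
class Var:
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return Op('+', self, other)
    def __mul__(self, other):
        return Op('*', self, other)
    def eval(self, env):
        return env[self.name]   # leaf: look up the concrete value

class Op(Var):
    def __init__(self, sym, left, right):
        self.sym, self.left, self.right = sym, left, right
    def eval(self, env):
        a, b = self.left.eval(env), self.right.eval(env)
        return a + b if self.sym == '+' else a * b

# Define the expression once (nothing is computed yet)...
x, y = Var('x'), Var('y')
expr = x * x + y
# ...then evaluate it against concrete inputs.
result = expr.eval({'x': 3, 'y': 4})   # 3*3 + 4 = 13
```

Because the graph exists before evaluation, a system like Theano can optimize it and compile it to fast C or GPU code before any numbers flow through.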
Our Partners