Trusted Performance

4U GPU Servers

Unrivaled GPU Systems

Maximum Acceleration and Flexibility for AI/Deep Learning and HPC Applications

Servers and Storage Solutions
Rack Mount Servers

Universal GPU systems

Modular Building-Block Design; Future-Proof, Open-Standards-Based Platform in 4U, 5U, or 8U for Large-Scale AI Training and HPC Applications

  • GPU: NVIDIA HGX H100/A100 4-GPU/8-GPU, AMD Instinct MI300X/MI250 OAM Accelerator, Intel Data Center GPU Max Series
  • CPU: Intel Xeon or AMD EPYC
  • Memory: Up to 32 DIMMs, 8TB
  • Drive: Up to 24 Hot-swap U.2 or 2.5" NVMe/SATA drives
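The 32-DIMM / 8TB memory ceiling implies a per-module capacity; as a quick arithmetic check (the 256 GB-per-DIMM figure is our assumption, not stated on the spec sheet):

```python
# Sanity-check the advertised memory ceiling: 32 DIMM slots at an
# assumed 256 GB per module (assumption, not from the spec sheet)
# reproduces the 8TB figure quoted above.
DIMM_SLOTS = 32
GB_PER_DIMM = 256  # assumed module capacity

total_gb = DIMM_SLOTS * GB_PER_DIMM
total_tb = total_gb / 1024

print(f"{total_gb} GB = {total_tb:.0f} TB")  # 8192 GB = 8 TB
```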

Liquid-Cooled Universal GPU Systems

Direct-to-chip liquid-cooled systems for high-density AI infrastructure at scale.

  • GPU: NVIDIA HGX H100 4-GPU/8-GPU
  • CPU: Intel Xeon or AMD EPYC
  • Memory: Up to 32 DIMMs, 8TB
  • Drive: Up to 24 Hot-swap U.2 or 2.5" NVMe/SATA drives

4U/5U GPU Lines with PCIe 5.0

Maximum Acceleration and Flexibility for AI/Deep Learning and HPC Applications

  • GPU: Up to 10 NVIDIA H100 PCIe GPUs, or up to 10 double-width PCIe GPUs
  • CPU: Intel Xeon or AMD EPYC
  • Memory: Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
  • Drive: Up to 24 Hot-swap 2.5" SATA/SAS/NVMe

AMD APU Systems

Multi-Processor System Combining CPU and GPU, Designed for the Convergence of AI and HPC

  • GPU: 4 AMD Instinct MI300A Accelerated Processing Units (APUs)
  • CPU: AMD Instinct MI300A Accelerated Processing Unit (APU)
  • Memory: Up to 512GB integrated HBM3 memory (4x 128GB)
  • Drive: Up to 8 2.5" NVMe, or optional 24 2.5" SATA/SAS via storage add-on card; 2 M.2 drives
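The MI300A's CPU and GPU share a unified HBM3 pool rather than separate DRAM and VRAM; the 512 GB total above follows directly from the per-APU capacity listed:

```python
# The spec lists 4 MI300A APUs at 128 GB of HBM3 each; the unified
# memory available to both CPU and GPU cores is their product.
APU_COUNT = 4
HBM3_GB_PER_APU = 128

total_hbm_gb = APU_COUNT * HBM3_GB_PER_APU
print(f"{total_hbm_gb} GB unified HBM3")  # 512 GB unified HBM3
```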

4U GPU with NVLink and PCIe 4.0

Flexible Design for AI and Graphically Intensive Workloads, Supporting Up to 10 GPUs

  • GPU: NVIDIA HGX A100 8-GPU with NVLink, or up to 10 double-width PCIe GPUs
  • CPU: Intel Xeon or AMD EPYC
  • Memory: Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
  • Drive: Up to 24 Hot-swap 2.5" SATA/SAS/NVMe

4U GPU Lines with PCIe 4.0

Flexible Design for AI and Graphically Intensive Workloads, Supporting Up to 10 GPUs

  • GPU: NVIDIA HGX A100 8-GPU with NVLink, or up to 10 double-width PCIe GPUs
  • CPU: Intel Xeon or AMD EPYC
  • Memory: Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
  • Drive: Up to 24 Hot-swap 2.5" SATA/SAS/NVMe

Connect with an Expert