AceleMax DGS-224A

2U Dual AMD EPYC Processor 4x Double-Width / 8x Single-Width PCIe 4.0 GPU Server

  • Supports up to 4 double-width or 8 single-width PCIe GPUs in a 2U chassis
  • Supports two AMD EPYC™ 7002 or 7003 series processors
  • Designed for VDI, machine intelligence, deep learning, machine learning, artificial intelligence, neural networks, advanced rendering, and compute


The AceleMax DGS-224A is a 2U 4/8-GPU server built on the AMD EPYC 7002 or 7003 series processor platform. It features high-speed PCIe Gen 4.0 support across the entire chassis, allowing it to host next-generation GPU, FPGA, ASIC, and other hardware acceleration cards, including the latest NVIDIA A100 PCIe GPUs built on the NVIDIA Ampere architecture as well as T4 and Quadro® RTX GPUs. In addition to supporting four double-width or eight single-width PCIe Gen 4.0 GPU cards, the system has a pair of low-profile PCIe Gen 4.0 slots suitable for high-speed networking technologies such as 200 Gigabit Ethernet and 200 Gb/s HDR InfiniBand. The GPU bays also include MUX switches that allow single-width cards to be installed at PCIe Gen 4.0 x8 bandwidth, bringing the total to as many as 10 expansion cards.
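
To sanity-check how an installed card has trained its link (for example, Gen 4.0 x16 in a dedicated double-width bay versus x8 when a MUX splits a bay), the negotiated PCIe generation and width can be read back from the driver. The sketch below is illustrative only and assumes the nvidia-ml-py (pynvml) package on a host with the NVIDIA driver installed.

```python
# Sketch: report the negotiated PCIe link generation/width of each NVIDIA GPU.
# Assumes the nvidia-ml-py (pynvml) package and a working NVIDIA driver.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older pynvml releases return bytes
            name = name.decode()
        cur_gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        max_gen = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(handle)
        cur_w = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
        max_w = pynvml.nvmlDeviceGetMaxPcieLinkWidth(handle)
        print(f"GPU {i} {name}: PCIe Gen {cur_gen}/{max_gen}, x{cur_w}/x{max_w}")
finally:
    pynvml.nvmlShutdown()
```

Note that GPUs may downshift the link at idle for power savings, so the reported values are most meaningful under load.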

 

The front of the server features eight 3.5″ drive bays. All eight bays support both PCIe Gen 4.0 x4 NVMe and SATA 6 Gb/s connections, so the server can be configured with high-capacity, low-cost rotational hard drives, high-performance NVMe SSDs, or a combination of both.
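
Because each bay can hold either a SATA/SAS drive or an NVMe SSD, a quick inventory of what is actually installed can be taken from Linux sysfs. The sketch below is a rough illustration and assumes a Linux host; the device-name filter is generic, not specific to this server.

```python
# Sketch: classify installed drives as NVMe vs SATA/SAS and HDD vs SSD (Linux sysfs).
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    name = dev.name
    if name.startswith(("loop", "ram", "dm-", "md")):
        continue                                   # skip virtual block devices
    rotational = (dev / "queue" / "rotational").read_text().strip() == "1"
    kind = "NVMe SSD" if name.startswith("nvme") else ("rotational HDD" if rotational else "SATA/SAS SSD")
    sectors = int((dev / "size").read_text())      # size in 512-byte sectors
    print(f"{name}: {kind}, {sectors * 512 / 1e12:.2f} TB")
```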

NVIDIA A100 PCIe GPU

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration and flexibility to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC applications. As the engine of the NVIDIA data center platform, the A100 provides up to 20X higher AI performance and 2.5X higher HPC performance than V100 GPUs, and it can efficiently scale up to thousands of GPUs or be partitioned into seven isolated GPU instances with the new Multi-Instance GPU (MIG) capability to accelerate workloads of all sizes.
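
MIG partitions are created with NVIDIA's standard driver tooling (for example, nvidia-smi's MIG commands); as a hedged illustration only, the sketch below uses the nvidia-ml-py (pynvml) package to report whether MIG mode is enabled on each GPU and to enumerate any instances that already exist.

```python
# Sketch: check MIG mode and list any MIG devices on each GPU.
# Assumes the nvidia-ml-py (pynvml) package and an A100-class GPU/driver.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        except pynvml.NVMLError:
            print(f"GPU {i}: MIG not supported")
            continue
        enabled = current == pynvml.NVML_DEVICE_MIG_ENABLE
        print(f"GPU {i}: MIG {'enabled' if enabled else 'disabled'}")
        if enabled:
            # Enumerate MIG instances that have been created (up to 7 on an A100).
            for j in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)):
                try:
                    mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, j)
                except pynvml.NVMLError:
                    continue                      # slot not populated
                uuid = pynvml.nvmlDeviceGetUUID(mig)
                if isinstance(uuid, bytes):
                    uuid = uuid.decode()
                print(f"  MIG device {j}: {uuid}")
finally:
    pynvml.nvmlShutdown()
```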

 

The NVIDIA A100 GPU features third-generation Tensor Core technology that supports a broad range of math precisions, providing a unified workload accelerator for data analytics, AI training, AI inference, and HPC. It also introduces new capabilities: Multi-Instance GPU (MIG), which right-sizes GPU resources and delivers up to seven simultaneous instances per GPU for optimal utilization; sparsity acceleration, which harnesses sparsity in AI models for up to 2X AI performance; and third-generation NVLink and NVSwitch, which provide 2X more bandwidth than the V100 GPU for efficient scaling. Accelerating both scale-up and scale-out workloads on one platform enables elastic data centers that can dynamically adjust to shifting application workload demands, simultaneously boosting throughput and driving down data center cost.
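
As one concrete illustration of those Tensor Core precisions, frameworks running on an Ampere-class GPU can execute FP32 matrix math as TF32 and mixed-precision work as FP16 on the Tensor Cores. The sketch below assumes a recent PyTorch build with CUDA support; it is an example of framework usage, not a feature of the server itself.

```python
# Sketch: using Ampere Tensor Core precisions from PyTorch (assumes torch + CUDA GPU).
import torch

# Allow FP32 matmuls/convolutions to run as TF32 on Tensor Cores.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c = a @ b                                       # FP32 tensors, TF32 Tensor Core math

# FP16 automatic mixed precision for training/inference.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    d = a @ b                                   # FP16 Tensor Core matmul
print(c.dtype, d.dtype)                         # torch.float32, torch.float16
```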

 

Combined with the NVIDIA software stack, the A100 GPU accelerates all major deep learning and data analytics frameworks and over 700 HPC applications. NVIDIA NGC, a hub for GPU-optimized software containers for AI and HPC, simplifies application deployments so researchers and developers can focus on building their solutions.
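
As a rough example of how an NGC container might be pulled and launched on this class of system, the sketch below shells out to Docker from Python. It assumes Docker plus the NVIDIA Container Toolkit are installed, and the image tag shown is only a placeholder for whichever NGC release is current.

```python
# Sketch: pull and run an NGC framework container (assumes Docker + NVIDIA Container Toolkit).
import subprocess

# Placeholder image tag; pick the current release from the NGC catalog.
image = "nvcr.io/nvidia/pytorch:24.01-py3"

subprocess.run(["docker", "pull", image], check=True)
# Expose all GPUs to the container and confirm they are visible inside it.
subprocess.run(["docker", "run", "--rm", "--gpus", "all", image, "nvidia-smi"], check=True)
```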

 

Applications:

AI, HPC, VDI, machine intelligence, deep learning, machine learning, artificial intelligence, neural networks, advanced rendering, and compute.

Processor

Dual AMD EPYC™ 7002 or 7003 series processors, 7 nm, Socket SP3; up to 64 cores, 128 threads, and 256 MB L3 cache per processor; TDP up to 200W

Memory

  • 16 x DDR4 DIMM slots
  • 8-Channel memory architecture
  • Up to 4TB RDIMM/LRDIMM DDR4-3200 memory

Graphics Processing Unit (GPU)

  • 4x NVIDIA A100 PCIe 4.0, A40, A10, A30, A16 PCIe 4.0, T4, or Quadro RTX 6000 & 8000 passive PCIe GPUs

Expansion Slots

  • 4x FH/FL PCIe Gen 4.0 x16/x8 slots (MUX pairs with adjacent x8 slots)
  • 4x FH/FL PCIe Gen 4.0 x0/x8 slots (MUX pairs with adjacent x16 slots)
  • 2x low-profile PCIe Gen 4.0 x16 slots

Storage

  • 8x 3.5″/2.5″ hot-swap SSD/HDD bays
  • SAS 12 Gb/s / SATA 6 Gb/s / NVMe backplane
  • 2x SATA-DOM

Network Controller

  • 2 x 10GBASE-T ports
  • 1 x GbE management LAN

Power Supply

1+1 redundant 2,200 W PSUs, 80 PLUS Platinum

System Dimension

3.43″ x 17.25″ x 32.71″ / 87mm x 438.4mm x 831mm (H x W x D)

Optimized for Turnkey Solutions

Enable powerful design, training, and visualization with built-in software tools including TensorFlow, Caffe, Torch, Theano, BIDMach, cuDNN, NVIDIA CUDA Toolkit, and NVIDIA DIGITS.
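
A minimal post-install sanity check, assuming the TensorFlow build listed above includes GPU support, is to confirm that the framework can see all installed accelerators:

```python
# Sketch: verify that TensorFlow detects the installed GPUs (assumes a CUDA-enabled build).
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow {tf.__version__} sees {len(gpus)} GPU(s)")
for gpu in gpus:
    details = tf.config.experimental.get_device_details(gpu)
    print(" ", gpu.name, details.get("device_name"))
```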