NVIDIA DGX A100 Deep Learning Console

Price: 329,000.00 – 359,900.00


GPU: 8x NVIDIA A100 80GB Tensor Core GPUs
GPU Memory: 640GB
Performance: 5 petaFLOPS AI – 10 petaOPS INT8
NVIDIA NVSwitches: 6
Power: 6.5 kW max
CPUs: Dual AMD Rome 7742 – 128 cores total – 2.25 GHz (base) – 3.4 GHz (max boost)
System Memory: 2 TB
Networking: 8x NVIDIA ConnectX-7 200Gb/s InfiniBand

2x NVIDIA ConnectX-7 VPI 10/25/50/100/200 Gb/s Ethernet
8x NVIDIA ConnectX-6 VPI 200Gb/s InfiniBand
2x NVIDIA ConnectX-6 VPI 10/25/50/100/200 Gb/s Ethernet

Storage:
OS: 2x 1.92TB M.2 NVME
Internal: 30TB (8x 3.84 TB) U.2 NVMe

Software:
Primary: Ubuntu Linux OS
Others: Red Hat Enterprise Linux, CentOS

System Weight: 271.5 lbs (123.16 kgs)
Packed System Weight: 359.7 lbs (163.16 kgs)
Dimensions: Height: 10.4 in – Width: 19.0 in – Length: 35.3 in
Operating Temperature Range: 5ºC to 30ºC (41ºF to 86ºF)
Lead time is 4-6 weeks. All sales are final; no returns or cancellations. A 13% import tax applies to mainland China importers. One unit per waybill. We pay all duties and taxes for shipments to the Gulf Cooperation Council, North America, and the European Union (incl. the UK). For bulk inquiries, consult a live chat agent or call our toll-free number.


ESSENTIAL BUILDING BLOCK OF THE AI DATA CENTER

NVIDIA DGX™ A100 is the universal system for all AI workloads, offering unprecedented compute density, performance, and flexibility in a 5 petaFLOPS AI system. NVIDIA DGX A100 features the world’s most advanced accelerator, the NVIDIA A100 Tensor Core GPU, enabling enterprises to consolidate training, inference, and analytics into a unified, easy-to-deploy AI infrastructure that includes direct access to NVIDIA AI experts.

NVIDIA DGX A100
The Challenge of Scaling Enterprise AI

Every business needs to transform using artificial intelligence (AI), not only to survive, but to thrive in challenging times. However, the enterprise requires a platform for AI infrastructure that improves upon traditional approaches, which historically involved slow compute architectures that were siloed by analytics, training, and inference workloads. The old approach created complexity, drove up costs, constrained speed of scale, and was not ready for modern AI. Enterprises, developers, data scientists, and researchers need a new platform that unifies all AI workloads, simplifying infrastructure and accelerating ROI.

The Universal System for Every AI Workload
DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor and replacing legacy compute infrastructure with a single, unified system. DGX A100 also offers the unprecedented ability to deliver fine-grained allocation of computing power, using the Multi-Instance GPU (MIG) capability in the NVIDIA A100 Tensor Core GPU, which enables administrators to assign resources that are right-sized for specific workloads. This ensures that the largest and most complex jobs are supported, along with the simplest and smallest. Running the DGX software stack with optimized software from NGC, the combination of dense compute power and complete workload flexibility makes DGX A100 an ideal choice for both single-node deployments and large-scale Slurm and Kubernetes clusters deployed with NVIDIA DeepOps.
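
To make the idea of right-sizing concrete, here is a minimal, hypothetical Python sketch (assuming PyTorch is available, e.g. from an NGC container) that surveys the GPUs visible on a node and estimates how many roughly 10 GB slices each one could host. The helpers survey_gpus and plan_partitions are illustrative only, not NVIDIA tooling; actual MIG partitioning is performed with NVIDIA's own driver utilities.

```python
# Hypothetical sketch: enumerate the GPUs on a DGX A100 node and estimate
# how many ~10 GB slices each could host. Assumes PyTorch is installed
# (e.g. via an NGC container) and an NVIDIA driver is present.
import torch

def survey_gpus():
    """Return (index, name, total_memory_GB) for each visible GPU."""
    gpus = []
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        gpus.append((i, props.name, props.total_memory / 1024**3))
    return gpus

def plan_partitions(gpus, per_job_gb=10):
    """Illustrative helper: how many ~per_job_gb slices fit on each GPU.
    An A100 80GB supports at most seven MIG instances, so cap the estimate
    at 7; the real carving is done with NVIDIA's MIG tooling."""
    return {idx: min(7, int(mem_gb // per_job_gb)) for idx, _name, mem_gb in gpus}

if __name__ == "__main__":
    gpus = survey_gpus()
    for idx, name, mem_gb in gpus:
        print(f"GPU {idx}: {name}, {mem_gb:.0f} GB")
    print("Estimated 10 GB slices per GPU:", plan_partitions(gpus))
```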

8X NVIDIA A100 GPUS WITH UP TO 640 GB TOTAL GPU MEMORY
12 NVLinks/GPU, 600 GB/s GPU-to-GPU Bi-directional Bandwidth.
6X NVIDIA NVSWITCHES
4.8 TB/s Bi-directional Bandwidth, 2X More than Previous Generation NVSwitch.
10x MELLANOX CONNECTX-6 200Gb/s NETWORK INTERFACE
500 GB/s Peak Bi-directional Bandwidth.
DUAL 64-CORE AMD CPUs AND UP TO 2 TB SYSTEM MEMORY
3.2X More Cores to Power the Most Intensive AI Jobs.
Up to 30 TB GEN4 NVME SSD
50GB/s Peak Bandwidth, 2X Faster than Gen3 NVME SSDs.
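
The headline figures above follow directly from the per-component numbers quoted in this spec. A minimal arithmetic check is sketched below in Python; the per-port and per-drive figures are taken from this page and should be read as vendor-quoted peak values, not independent measurements.

```python
# Quick arithmetic check of the aggregate figures quoted above.
# Per-unit numbers come from this spec sheet (vendor-quoted peaks).

gpus = 8
gpu_memory_gb = 80                 # per A100 80GB GPU
nvlinks_per_gpu = 12
gpu_to_gpu_bw_gb_s = 600           # quoted bidirectional bandwidth per GPU
nics = 10
nic_rate_gb_per_s = 200 / 8        # 200 Gb/s per port = 25 GB/s per direction
nvme_drives = 8
nvme_capacity_tb = 3.84

print("Total GPU memory:", gpus * gpu_memory_gb, "GB")                          # 640 GB
print("Per-NVLink bandwidth:", gpu_to_gpu_bw_gb_s / nvlinks_per_gpu, "GB/s")    # 50 GB/s bidirectional
print("Peak network bandwidth:", nics * nic_rate_gb_per_s * 2, "GB/s bidir")    # 500 GB/s
print("Internal NVMe capacity:", round(nvme_drives * nvme_capacity_tb, 2), "TB")  # ~30 TB
```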
SYSTEM SPECIFICATIONS
NVIDIA DGX A100 640GB

GPUs: 8x NVIDIA A100 80GB Tensor Core GPUs
GPU Memory: 640GB
Performance: 5 petaFLOPS AI, 10 petaOPS INT8
NVIDIA NVSwitches: 6
Power: 6.5 kW max
CPUs: Dual AMD Rome 7742, 128 cores total, 2.25 GHz (base), 3.4 GHz (max boost)
System Memory: 2 TB
Networking: 8x Single-Port NVIDIA ConnectX-7 200Gb/s InfiniBand
2x Dual-Port NVIDIA ConnectX-7 VPI 10/25/50/100/200 Gb/s Ethernet
8x Single-Port NVIDIA ConnectX-6 VPI 200Gb/s InfiniBand
2x Dual-Port NVIDIA ConnectX-6 VPI 10/25/50/100/200 Gb/s Ethernet
Storage: OS: 2x 1.92TB M.2 NVMe drives
Internal Storage: 30TB (8x 3.84 TB) U.2 NVMe drives
Software: Primary: Ubuntu Linux OS
Others: Red Hat Enterprise Linux, CentOS
System Weight: 271.5 lbs (123.16 kgs) max
Packed System Weight: 359.7 lbs (163.16 kgs) max
Dimensions: Height: 10.4 in (264.0 mm), Width: 19.0 in (482.3 mm) max, Length: 35.3 in (897.1 mm) max
Operating Temperature Range: 5ºC to 30ºC (41ºF to 86ºF)
Size: DGX A100, HGX A100
