NVIDIA A100 Enterprise 40GB



Specifications:



CUDA Cores: 6912

Streaming Multiprocessors: 108

Tensor Cores (Gen 3): 432

GPU Memory: 40 GB HBM2 (ECC on by default)

Memory Interface: 5120-bit

Memory Bandwidth: 1555 GB/s

NVLink: 2-Way, 2-Slot, 600 GB/s Bidirectional

MIG (Multi-Instance GPU) Support: Yes, up to 7 GPU Instances

FP64: 9.7 TFLOPS

FP64 Tensor Core: 19.5 TFLOPS

FP32: 19.5 TFLOPS

TF32 Tensor Core: 156 TFLOPS | 312 TFLOPS*

BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*

FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*

INT8 Tensor Core: 624 TOPS | 1248 TOPS*

Thermal Solution: Passive

vGPU Support: NVIDIA Virtual Compute Server (vCS)

System Interface: PCIe 4.0 x16

* Doubled figures are effective throughput with structural sparsity enabled.
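
The Tensor Core rows above are runtime math modes rather than separately programmed hardware: frameworks select them with a global switch or an autocast region. As a minimal sketch, assuming a CUDA build of PyTorch with an A100 visible as the default device (the tensor sizes are illustrative only), the TF32 and BF16 paths can be exercised like this:

    import torch

    # TF32 lets FP32 matmuls and convolutions run on the Tensor Cores
    # (the 156/312 TFLOPS rows above) while keeping FP32 tensors.
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    x = torch.randn(4096, 4096, device="cuda")  # illustrative sizes
    w = torch.randn(4096, 4096, device="cuda")

    # FP32 inputs; on an A100 this matmul dispatches to the TF32 path.
    y = x @ w

    # BF16 Tensor Core path (the 312/624 TFLOPS rows) via autocast.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        y_bf16 = x @ w

TF32 keeps FP32's 8-bit exponent range but rounds the mantissa to 10 bits, which is why it appears as a separate line item from plain FP32 above.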

Description:



The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale, powering the world's highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. As the engine of the NVIDIA data center platform, the A100 provides up to 20x higher performance than the prior NVIDIA Volta generation. The A100 can efficiently scale up, or it can be partitioned into as many as seven isolated GPU instances with Multi-Instance GPU (MIG), giving elastic data centers a unified platform that adjusts dynamically to shifting workload demands.
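
As a concrete, hedged illustration of MIG from the host side, the sketch below uses the nvidia-ml-py (pynvml) NVML bindings to check whether MIG mode is enabled and list any GPU instances; it assumes the bindings are installed and that the A100 is NVML device index 0:

    import pynvml  # pip install nvidia-ml-py

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the A100 is device 0

    # MIG mode is reported as a (current, pending) pair; a GPU reset
    # promotes the pending mode to current.
    current, pending = pynvml.nvmlDeviceGetMigMode(handle)
    print(f"MIG mode: current={current}, pending={pending}")

    if current == pynvml.NVML_DEVICE_MIG_ENABLE:
        # Up to seven GPU instances can be carved out of a single A100.
        for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, i)
            except pynvml.NVMLError:
                continue  # slot not populated
            print(f"MIG instance {i}: {pynvml.nvmlDeviceGetName(mig)}")

    pynvml.nvmlShutdown()

Enabling MIG mode and creating the instances themselves is an administrative operation performed through driver tooling such as nvidia-smi; the NVML calls above are read-only.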