Enterprise-grade GPUs for AI training, inference, and HPC workloads
Next-generation Hopper GPU with unprecedented memory capacity for LLM training.
Industry-leading GPU for AI training, fine-tuning, and high-performance inference.
Battle-tested GPU powering the world's largest AI models.
Optimized for AI inference, visualization, and virtual workstations.
Mainstream data center GPU for mixed AI and HPC workloads.
Universal data center GPU combining AI inference with graphics capabilities.
NVIDIA RTX for visualization, rendering, and AI development
Professional visualization GPU with real-time ray tracing.
Maximum memory for complex 3D models and large datasets.
Ampere-based professional GPU for AI, rendering, and visualization.
Ada Lovelace architecture for next-gen professional workloads.
Next-generation Blackwell professional GPU for demanding workloads.
Compare our most popular data center GPUs side by side.
| Feature | H200 | H100 | A100 | L40S |
|---|---|---|---|---|
| Architecture | Hopper | Hopper | Ampere | Ada Lovelace |
| VRAM | 141 GB | 80 GB | 80 GB | 48 GB |
| Memory Type | HBM3e | HBM3 | HBM2e | GDDR6 |
| Memory Bandwidth | 4.8 TB/s | 3.35 TB/s | 2.0 TB/s | 864 GB/s |
| FP16 Tensor (with sparsity) | 1,979 TFLOPS | 1,979 TFLOPS | 624 TFLOPS | 733 TFLOPS |
| Best For | LLM Training | AI Training | Training & Inference | Inference + Graphics |