High-performance GPU instances powered by NVIDIA T4, A100, and H100 GPUs. Purpose-built for AI/ML training, deep learning, rendering, and scientific computing.
Hourly pricing shown. On-demand hourly billing runs roughly 1.5x the equivalent committed monthly rate.
NVIDIA A100: ideal for AI training and inference workloads

| Instance | vCPU | RAM | Storage | Bandwidth | Price/hr |
|---|---|---|---|---|---|
| gpu.a100.1x | 12 | 120 GB | 500 GB NVMe | - | ₹755/hr |
| gpu.a100.2x | 24 | 240 GB | 1 TB NVMe | - | ₹150/hr |
| gpu.a100.4x | 48 | 480 GB | 2 TB NVMe | - | ₹295/hr |
| gpu.a100.8x | 96 | 960 GB | 4 TB NVMe | - | ₹580/hr |
NVIDIA A100 80 GB: extended memory for large model training

| Instance | vCPU | RAM | Storage | Bandwidth | Price/hr |
|---|---|---|---|---|---|
| gpu.a100-80.1x | 12 | 120 GB | 500 GB NVMe | - | ₹95/hr |
| gpu.a100-80.2x | 24 | 240 GB | 1 TB NVMe | - | ₹185/hr |
| gpu.a100-80.4x | 48 | 480 GB | 2 TB NVMe | - | ₹365/hr |
| gpu.a100-80.8x | 96 | 960 GB | 4 TB NVMe | - | ₹720/hr |
NVIDIA H100: latest generation for cutting-edge AI workloads

| Instance | vCPU | RAM | Storage | Bandwidth | Price/hr |
|---|---|---|---|---|---|
| gpu.h100.1x | 16 | 200 GB | 1 TB NVMe | - | ₹165/hr |
| gpu.h100.2x | 32 | 400 GB | 2 TB NVMe | - | ₹325/hr |
| gpu.h100.4x | 64 | 800 GB | 4 TB NVMe | - | ₹640/hr |
| gpu.h100.8x | 128 | 1.6 TB | 8 TB NVMe | - | ₹1,250/hr |
NVIDIA T4: cost-effective for inference and light training

| Instance | vCPU | RAM | Storage | Bandwidth | Price/hr |
|---|---|---|---|---|---|
| gpu.t4.1x | 4 | 16 GB | 200 GB NVMe | - | ₹25/hr |
| gpu.t4.2x | 8 | 32 GB | 400 GB NVMe | - | ₹48/hr |
| gpu.t4.4x | 16 | 64 GB | 800 GB NVMe | - | ₹92/hr |
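As a sanity check on the billing model, the sketch below estimates what a listed hourly rate implies for a full month of usage, and what the committed monthly price would be if on-demand hourly billing carries a ~1.5x premium, as the pricing note states. The 730 hours/month figure and the helper names are illustrative assumptions, not part of the provider's API.

```python
# Sketch: compare on-demand hourly cost against the committed monthly rate
# implied by a ~1.5x hourly premium. 730 h/month is an assumed average.
HOURS_PER_MONTH = 730
HOURLY_PREMIUM = 1.5  # assumption: hourly billing ≈ 1.5x committed monthly rate

def on_demand_cost(hourly_rate_inr: float, hours: int) -> float:
    """Total cost of running at the listed hourly rate for `hours` hours."""
    return hourly_rate_inr * hours

def estimated_monthly_commit(hourly_rate_inr: float) -> float:
    """Committed-monthly price implied by the assumed 1.5x hourly premium."""
    return hourly_rate_inr / HOURLY_PREMIUM * HOURS_PER_MONTH

# Example with gpu.t4.1x, listed at ₹25/hr:
full_month_on_demand = on_demand_cost(25, HOURS_PER_MONTH)  # ₹18,250
monthly_commit = estimated_monthly_commit(25)               # ≈ ₹12,166.67
```

Running a full month on hourly billing costs noticeably more than the implied commitment, so long-running workloads favor monthly billing while bursty training jobs favor hourly.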
- High-bandwidth GPU-to-GPU communication for distributed training.
- Ready-to-use with CUDA, cuDNN, and popular ML frameworks.
- One-click Jupyter notebook access for interactive development.
- Save up to 70% with interruptible GPU instances for training.
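Interruptible instances can be reclaimed at any time, so a training job run on them should checkpoint its state periodically and resume from the last checkpoint after a restart. The stdlib sketch below shows the generic pattern, not this provider's API; the file path, checkpoint interval, and toy training step are all illustrative.

```python
import json
import os
import tempfile

# Illustrative checkpoint location; a real job would use persistent storage.
CKPT = os.path.join(tempfile.gettempdir(), "train_ckpt.json")

def save_checkpoint(step: int, state: dict) -> None:
    """Write the checkpoint atomically so an interruption can't corrupt it."""
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, CKPT)  # atomic rename

def load_checkpoint() -> tuple[int, dict]:
    """Resume from the last checkpoint, or start fresh if none exists."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            ckpt = json.load(f)
        return ckpt["step"], ckpt["state"]
    return 0, {}

def train(total_steps: int = 10) -> tuple[int, dict]:
    step, state = load_checkpoint()
    while step < total_steps:
        state["loss"] = 1.0 / (step + 1)  # stand-in for a real training step
        step += 1
        if step % 5 == 0:  # illustrative interval: checkpoint every 5 steps
            save_checkpoint(step, state)
    return step, state
```

If the instance is reclaimed mid-run, simply relaunching `train()` picks up from the most recent checkpoint instead of step 0, which is what makes the discounted interruptible capacity usable for long training jobs.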
- Train large language models with multi-GPU support
- Image classification, object detection, and segmentation
- Real-time rendering and ray-tracing workloads
- Molecular dynamics, weather simulation, and HPC
- Transcoding, AI upscaling, and real-time processing
- Deploy ML models for production inference