Access spare compute capacity at discounts of up to 90%. Perfect for fault-tolerant workloads, batch processing, CI/CD pipelines, and ML training with checkpoints.
Compare spot prices with on-demand rates
| Instance Type | On-Demand | Spot Price | Savings |
|---|---|---|---|
| gp.small | ₹350/mo | ₹52/mo | 85% |
| gp.medium | ₹700/mo | ₹105/mo | 85% |
| gp.large | ₹1,400/mo | ₹210/mo | 85% |
| gp.xlarge | ₹2,800/mo | ₹420/mo | 85% |
| gp.2xlarge | ₹5,600/mo | ₹840/mo | 85% |
| co.large | ₹1,200/mo | ₹180/mo | 85% |
| co.xlarge | ₹2,400/mo | ₹360/mo | 85% |
| mo.xlarge | ₹3,600/mo | ₹540/mo | 85% |
| gpu.t4.1x | ₹25/hr | ₹7.5/hr | 70% |
| gpu.a100.1x | ₹75/hr | ₹22.5/hr | 70% |
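The savings column follows directly from the two price columns. A quick sanity check, using values from the table above (a minimal illustration, not provider code):

```python
def savings_pct(on_demand: float, spot: float) -> int:
    """Percentage saved when paying the spot price instead of on-demand."""
    return round((1 - spot / on_demand) * 100)

# gp.medium: ₹700/mo on-demand vs ₹105/mo spot
print(savings_pct(700, 105))   # 85
# gpu.t4.1x: ₹25/hr on-demand vs ₹7.5/hr spot
print(savings_pct(25, 7.5))    # 70
```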
1. When capacity is available and the spot price is below your maximum, the instance starts.
2. Use the instance just like any on-demand instance.
3. On interruption, you get a 2-minute warning via the metadata service; save your state and terminate gracefully.
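The 2-minute warning is typically consumed by polling the instance metadata service from inside the workload. A minimal sketch, assuming a hypothetical termination-notice endpoint (the actual URL and response format are provider-specific; the fetcher is injectable so the logic can be tested without a live instance):

```python
import time
import urllib.request

# Hypothetical metadata endpoint -- check your provider's docs for the real path.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"


def check_termination(fetch=None):
    """Return the termination notice if an interruption is scheduled, else None."""
    if fetch is None:
        def fetch():
            try:
                with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
                    return resp.read().decode()
            except OSError:
                return None  # unreachable / 404 => no interruption scheduled
    return fetch()


def run_with_interruption_handling(work_step, save_state, fetch=None, poll_every=5):
    """Run work_step() repeatedly; on a termination notice, save state and stop.

    work_step() returns True when the job is finished.
    Returns the termination notice if interrupted, else None on normal completion.
    """
    while True:
        notice = check_termination(fetch)
        if notice is not None:
            save_state()  # persist progress within the 2-minute window
            return notice
        if work_step():
            return None
        time.sleep(poll_every)
```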
- Specify your instance type and the maximum price you're willing to pay.
- Handle interruptions gracefully with the metadata service and webhooks.
- Access spare compute capacity at a fraction of on-demand prices.
- Get notified 2 minutes before termination so you can save your work.
- Identical hardware and performance to on-demand instances.
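The launch condition above ("capacity is available and the spot price is below your max") reduces to a simple predicate. A sketch with hypothetical names, just to make the rule concrete:

```python
from dataclasses import dataclass


@dataclass
class SpotRequest:
    """Illustrative request shape: an instance type plus your price ceiling."""
    instance_type: str
    max_price: float  # the most you are willing to pay (₹/mo or ₹/hr)


def should_launch(request: SpotRequest, current_spot_price: float,
                  capacity_available: bool) -> bool:
    """A spot instance launches only when spare capacity exists and the
    current spot price is at or below your maximum."""
    return capacity_available and current_spot_price <= request.max_price
```

If the spot price later rises above your maximum, or capacity is reclaimed, the instance becomes a candidate for interruption.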
- Process large datasets with parallelizable workloads that can checkpoint progress.
- Run build and test jobs that can be retried after interruption.
- Run Apache Spark, Hadoop, and other distributed computing frameworks.
- Train models with checkpointing for fault tolerance.
- Transcode video files in parallel chunks that can be reprocessed.
- Run stateless crawling jobs that resume where they left off.
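Several of these use cases rely on the same pattern: checkpoint progress after each unit of work so an interrupted job resumes instead of restarting. A minimal sketch, assuming a local JSON checkpoint file (the file name and `.upper()` "work" are illustrative stand-ins):

```python
import json
import os

CHECKPOINT = "progress.json"  # illustrative checkpoint path


def load_checkpoint() -> int:
    """Resume from the last saved index, or start from scratch."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_index"]
    return 0


def save_checkpoint(next_index: int) -> None:
    """Persist progress so an interrupted run can pick up where it left off."""
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_index": next_index}, f)


def process(items):
    """Process items, checkpointing after each; safe to re-run after interruption."""
    i = load_checkpoint()
    done = []
    while i < len(items):
        done.append(items[i].upper())  # stand-in for the real unit of work
        i += 1
        save_checkpoint(i)
    return done
```

In production the checkpoint would live on durable storage (e.g. object storage or a block volume) rather than the instance's local disk, since the instance itself may be reclaimed.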