Live prices from 54 providers
Cheapest Cloud GPU
Lowest price per hour available right now
5,326 instances compared across 54 providers, 77 GPU models, and 136 regions.
Every available instance sorted by price
| # | GPU | Class | Provider | Price | VRAM | Count | Type |
|---|---|---|---|---|---|---|---|
| 1 | V100 | Older | Vast.ai | $0.0133/hr | 16GB | 1× | Spot |
| 2 | V100 | Older | Vast.ai | $0.0133/hr | 16GB | 1× | Spot |
| 3 | RTX 3080 Ti | Ampere | Vast.ai | $0.0267/hr | 12GB | 1× | Spot |
| 4 | V100 | Older | Vast.ai | $0.0282/hr | 16GB | 1× | On-Demand |
| 5 | V100 | Older | Vast.ai | $0.0300/hr | 16GB | 2× | Spot |
| 6 | V100 | Older | Vast.ai | $0.0305/hr | 16GB | 1× | On-Demand |
| 7 | RTX 5060 Ti | Blackwell | Vast.ai | $0.0360/hr | 16GB | 1× | Spot |
| 8 | V100 | Older | Verda | $0.0483/hr | 16GB | 1× | Spot |
| 9 | RTX 5060 Ti | Blackwell | Vast.ai | $0.0497/hr | 16GB | 1× | On-Demand |
| 10 | V100 | Older | Vast.ai | $0.0533/hr | 16GB | 4× | Spot |
| 11 | RTX 3090 | Ampere | Vast.ai | $0.0533/hr | 24GB | 1× | Spot |
| 12 | A16 | Older | Vultr | $0.0590/hr | 2GB | 1× | On-Demand |
| 13 | A16 | Older | Vultr | $0.0590/hr | 2GB | 1× | On-Demand |
| 14 | A16 | Older | Vultr | $0.0590/hr | 2GB | 1× | On-Demand |
| 15 | A16 | Older | Vultr | $0.0590/hr | 2GB | 1× | On-Demand |
| 16 | A16 | Older | Vultr | $0.0590/hr | 2GB | 1× | On-Demand |
| 17 | A16 | Older | Vultr | $0.0590/hr | 2GB | 1× | On-Demand |
| 18 | A16 | Older | Vultr | $0.0590/hr | 2GB | 1× | On-Demand |
| 19 | A16 | Older | Vultr | $0.0590/hr | 2GB | 1× | On-Demand |
| 20 | A16 | Older | Vultr | $0.0590/hr | 2GB | 1× | On-Demand |
Showing 20 of 5,326 available instances · Updated continuously
Cheapest by Performance Tier
Best price for each GPU class — live from 54 providers
High Performance
H100 · H200 · A100 · MI300X
Mid-Range
L40S · A40 · L4 · A10G
Budget
T4 · RTX 4090 · RTX 3090 · RTX 3080
How to Find the Cheapest Cloud GPU in 2026
Cloud GPU pricing varies dramatically — the same NVIDIA H100 can cost anywhere from $1.50/hr to $4.50/hr depending on provider, region, and commitment type.
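To make that spread concrete, here is a minimal sketch of what it costs over a month of continuous use; the $1.50 and $4.50 rates come from the range above, and the 730 hours/month figure is our assumption for 24/7 usage:

```python
# Illustrative arithmetic for the H100 price spread above.
# Assumption: the instance runs 24/7 (~730 hours per month).
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, hours: float = HOURS_PER_MONTH) -> float:
    """Cost of running one instance for a month at the given hourly rate."""
    return hourly_rate * hours

cheap = monthly_cost(1.50)   # $1,095/month
pricey = monthly_cost(4.50)  # $3,285/month
print(f"Same H100, price spread: ${pricey - cheap:,.0f}/month")  # $2,190/month
```

At full utilization, picking the cheaper provider for identical hardware saves over $2,000 per month per GPU.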
5 strategies to cut GPU costs
- Use spot instances for batch work. 50–80% below on-demand on most providers.
- Right-size your GPU. An A100 80GB is overkill for a 7B model. Match VRAM to your workload.
- Compare across providers. Hyperscalers charge 2–3× more than GPU-native clouds for the same hardware.
- Use reserved pricing. 30–60% off with 1-year or 3-year commitments.
- Factor in egress. Hyperscalers charge $0.08–$0.12/GB for data transfer out.
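The right-sizing point is easy to check with back-of-envelope arithmetic: model weights need roughly one gigabyte per billion parameters per byte of precision, plus headroom for KV cache and activations. A minimal sketch (the 1.2× overhead factor is our assumption; real headroom depends on batch size and context length):

```python
# Rough VRAM estimate for serving a model: weights plus an overhead factor
# for KV cache and activations. The 1.2x factor is an assumption, not a
# measured figure.
def vram_needed_gb(params_billion: float, bytes_per_param: float,
                   overhead: float = 1.2) -> float:
    weights_gb = params_billion * bytes_per_param  # 1B params ~ 1 GB per byte/param
    return weights_gb * overhead

# A 7B model in fp16 (2 bytes/param) fits on a 24GB RTX 3090:
print(vram_needed_gb(7, 2))    # 16.8 GB
# ...while a 70B model in fp16 needs multiple GPUs or aggressive quantization:
print(vram_needed_gb(70, 2))   # 168 GB
```

By this estimate, renting an A100 80GB for a 7B fp16 model leaves most of the card idle; a 24GB consumer GPU at a fraction of the price covers it.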
Self-hosting vs API
Self-hosted GPUs can be 5–50× cheaper per token than API providers, assuming high utilization. A single H100 running Llama 3 70B costs ~$0.05/1M tokens, versus $2.50/1M for GPT-4o. Use our LLM Cost Calculator to compare for your workload.
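The per-token comparison reduces to one formula: hourly price divided by tokens served per hour. This sketch shows where a figure like ~$0.05/1M comes from; the $2.50/hr rate and the 14,000 tok/s aggregate batched throughput are illustrative assumptions you should replace with your own measurements:

```python
# Back-of-envelope $/1M-token cost for a self-hosted GPU.
# Assumptions: price_per_hour from a table like the one above;
# tokens_per_sec is sustained aggregate (batched) throughput from a load
# test of your own serving stack.
def cost_per_million_tokens(price_per_hour: float, tokens_per_sec: float) -> float:
    tokens_per_hour = tokens_per_sec * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# e.g. an H100 at $2.50/hr sustaining ~14,000 tok/s of batched throughput:
print(round(cost_per_million_tokens(2.50, 14_000), 3))  # 0.05
```

Note the sensitivity to throughput: at low utilization (say 500 tok/s of real traffic) the same GPU costs ~$1.39/1M tokens, which narrows the gap with API pricing considerably.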
Get notified when prices drop
Set a price threshold for any GPU model and we'll email you when it's available.
Set up price alerts · Free, no signup required