The NVIDIA H100 is the most sought-after GPU in the cloud. It also has the widest price spread: GPU Tracker currently tracks 70 single-H100 instances across 13 providers, with prices ranging from $0.80/hr to $11.56/hr. That's a 14x gap for the same chip.
Below is a complete, current ranking of every provider offering single H100 GPUs — no multi-GPU instances, no inflated list prices, just the cheapest available entry point per provider as of March 2026.
All 13 H100 Providers Ranked by Price
| Rank | Provider | Cheapest H100 | Type | Notes |
|---|---|---|---|---|
| #1 | Verda | $0.80/hr | Spot | Cheapest H100 in market |
| #2 | AWS | $1.16/hr | Spot | p5 spot instances |
| #3 | RunPod | $1.25/hr | Spot | Community cloud |
| #4 | Nebius | $1.25/hr | On-demand | EU-based, GDPR |
| #5 | Vast.ai | $1.47/hr | Spot | Marketplace, variable |
| #6 | GCP | $1.54/hr | Spot | A3 spot instances |
| #7 | Crusoe | $1.60/hr | On-demand | Carbon-neutral compute |
| #8 | Cudo Compute | $1.87/hr | On-demand | Distributed compute |
| #9 | Hyperstack | $1.90/hr | On-demand | UK-based |
| #10 | Lambda Labs | $2.49/hr | On-demand | Popular for ML teams |
| #11 | OVHcloud | $2.99/hr | On-demand | European provider |
| #12 | Scaleway | $3.22/hr | On-demand | EU, H100 SXM |
| #13 | Latitude.sh | $7.97/hr | On-demand | Bare metal |
Data sourced from GPU Tracker's live feed, updated every 6 hours. Spot prices fluctuate. Check real-time H100 prices before renting.
Spot vs On-Demand H100 Prices
The H100 spot market has genuine deals. Spot H100 instances start at $0.80/hr with a median of $1.75/hr. On-demand H100 starts at $1.58/hr with a median of $2.99/hr. The spot-to-on-demand discount for H100s averages around 40%.
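The ~40% figure follows directly from the two medians. As a quick sanity check (prices taken from the paragraph above):

```python
# Median H100 prices quoted above; verify the ~40% spot discount claim.
spot_median = 1.75       # $/hr, H100 spot
on_demand_median = 2.99  # $/hr, H100 on-demand

discount = 1 - spot_median / on_demand_median
print(f"Median spot discount: {discount:.0%}")  # → 41%
```

The discount on the cheapest listings is steeper still ($0.80 spot vs $1.58 on-demand is roughly 49% off), but medians better reflect what most buyers will actually see.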
Important: H100 spot instances on decentralized platforms (Vast.ai, RunPod Community) can be interrupted with 30-second to 5-minute notice. Always use checkpoint saving if running training jobs on spot H100s. For inference APIs, use on-demand or reserved instances only.
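The checkpointing advice above can be sketched in a few lines. This is a minimal, framework-agnostic example (the file path, step counter, and "training work" are illustrative stand-ins, not any provider's API): save state periodically and atomically, so a spot reclaim at any moment loses at most one save interval.

```python
import os
import pickle
import tempfile

# Illustrative checkpoint path; in practice this should live on
# persistent storage that survives the instance (e.g. a network volume).
CKPT = os.path.join(tempfile.gettempdir(), "spot_demo.ckpt")

def load_checkpoint():
    # Resume from the last saved step if a checkpoint exists.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "loss_history": []}

def save_checkpoint(state):
    # Write to a temp file, then rename: os.replace is atomic, so a
    # spot interruption mid-write never leaves a corrupt checkpoint.
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)

def train(total_steps=100, save_every=10):
    state = load_checkpoint()
    while state["step"] < total_steps:
        state["step"] += 1
        state["loss_history"].append(1.0 / state["step"])  # stand-in for real work
        if state["step"] % save_every == 0:
            save_checkpoint(state)
    return state
```

With a 30-second interruption notice, `save_every` should be chosen so a single save comfortably fits inside that window; for large models that usually means saving less often but listening for the provider's termination signal to trigger one final save.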
H100 vs H200: Should You Upgrade?
The H200 is now available from 7 providers at prices starting at $0.33/hr spot and $1.99/hr on-demand. Given that the H200 offers 141GB HBM3e (vs H100's 80GB) and 4.8 TB/s bandwidth (vs 3.35 TB/s), the H200 spot price is a compelling alternative to on-demand H100 for large-model inference.
| Spec | H100 SXM5 | H200 SXM5 |
|---|---|---|
| VRAM | 80GB HBM3 | 141GB HBM3e |
| Memory Bandwidth | 3.35 TB/s | 4.8 TB/s |
| FP16 TFLOPS | 989 | 989 (same die) |
| Spot Price (from) | $0.80/hr | $0.33/hr |
| On-Demand Price (from) | $1.58/hr | $1.99/hr |
| Median Market Price | $2.59/hr | $2.29/hr |
The H200's median market price ($2.29/hr) is actually lower than the H100's median ($2.59/hr), while offering 76% more VRAM. H100 demand remains higher, so its prices hold firm; the smaller pool of H200 buyers has more negotiating power because H200 supply is less constrained relative to demand. Check the H200 price comparison before defaulting to H100.
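One way to make the H200's value concrete is dollars per GB of VRAM per hour, a rough metric for memory-bound inference. A small sketch, using the median prices and VRAM figures from the table above:

```python
# Specs and median market prices from the H100 vs H200 table above.
h100 = {"median_price_hr": 2.59, "vram_gb": 80}
h200 = {"median_price_hr": 2.29, "vram_gb": 141}

def price_per_gb_hour(gpu):
    # Dollars per GB of VRAM per hour: a rough value metric for
    # workloads whose cost is dominated by fitting the model in memory.
    return gpu["median_price_hr"] / gpu["vram_gb"]

h100_rate = price_per_gb_hour(h100)  # ~$0.0324 per GB-hr
h200_rate = price_per_gb_hour(h200)  # ~$0.0162 per GB-hr
```

At median prices the H200 delivers VRAM at roughly half the hourly cost per GB, which is why it can beat an on-demand H100 for large-model inference even before considering its higher memory bandwidth.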
When to Choose Each Provider
- Verda ($0.80/hr): Best pure spot price if available. Check availability first — supply is limited.
- RunPod ($1.25/hr spot): Good mix of price and UX. Community Cloud is spot; Secure Cloud is on-demand at $1.99+/hr. Well-suited for ML teams.
- Nebius ($1.25/hr on-demand): Strong choice for EU-based teams needing GDPR compliance without paying Lambda or AWS prices.
- Lambda Labs ($2.49/hr on-demand): Reliable, developer-friendly, no spot interruptions. Worth the premium for teams that need consistency.
- AWS ($1.16/hr spot): Best if you're already in the AWS ecosystem and can tolerate spot interruptions on p5 instances.
- Latitude.sh ($7.97/hr): Bare metal H100 — justified for workloads that need dedicated hardware and zero noisy-neighbor interference.
All H100 prices update every 6 hours. See current availability and filter by region, commitment type, and VRAM at the H100 price comparison page.