The NVIDIA H100 remains the most-rented GPU for AI training and inference in 2026. After two years of volatile pricing, rates have settled into a predictable range. But where you rent still matters more than what you rent.
We track H100 prices across 50+ cloud providers, with data refreshed every 6 hours. Here is where the market stands as of March 2026, and where it is headed.
Where H100 Prices Sit Today
Across the 50+ cloud providers we track, H100 on-demand rates on specialized GPU clouds now cluster between $2.49/hr and $3.50/hr. That range has held steady since early 2026, after dropping 60-75% from peak 2024 levels.
The floor sits around $1.49/hr on marketplace providers like Vast.ai, where individual hosts set their own rates. The ceiling — on hyperscalers — still reaches $12.30/hr on Azure and $10.60/hr on Google Cloud for equivalent H100 SXM hardware.
| Provider Tier | Price Range ($/hr) | Examples |
|---|---|---|
| GPU Marketplaces | $1.49 – $2.49 | Vast.ai, TensorDock |
| Specialized GPU Clouds | $2.49 – $3.50 | RunPod, Lambda Labs, CoreWeave |
| Mid-Tier Clouds | $3.00 – $5.00 | Paperspace, Jarvis Labs |
| Hyperscalers | $6.98 – $12.30 | AWS, Azure, GCP |
Use the live comparison tool to see exact H100 prices from every provider, updated every 6 hours.
The Hyperscaler Markup Is Still Massive
The same NVIDIA H100 80GB SXM GPU costs $2.49/hr on RunPod and $12.30/hr on Azure. That is a 5x difference for identical silicon.
This gap has persisted for over a year. Hyperscalers justify it with managed services, compliance certifications, SLAs, and integration with their broader cloud ecosystems. For teams that need those things, the premium may be worth it. For teams running training jobs or batch inference, it rarely is.
Measured against the prices above, the cheapest H100 listing ($1.49/hr) sits roughly 88% below the most expensive one ($12.30/hr), the widest gap of any GPU currently tracked.
What That Looks Like Per Month
An 8x H100 node runs about $14,300/month on RunPod at $2.49/hr per GPU, versus roughly $70,800/month on Azure at $12.30/hr. Same hardware. Same CUDA version. A $56,000/month difference.
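The arithmetic behind that gap is simple enough to sanity-check. A quick sketch, assuming a 30-day month, the on-demand rates quoted above, and nothing beyond raw GPU-hours (no storage, egress, or committed-use discounts):

```python
# Back-of-the-envelope monthly cost for an 8x H100 node, using the on-demand
# rates quoted in this article. Assumes a 30-day month and raw GPU-hours only;
# storage, egress, and committed-use discounts are ignored.
NUM_GPUS = 8
HOURS_PER_MONTH = 24 * 30

rates_per_gpu_hr = {
    "RunPod (specialized GPU cloud)": 2.49,
    "Azure (hyperscaler)": 12.30,
}

monthly = {
    name: rate * NUM_GPUS * HOURS_PER_MONTH
    for name, rate in rates_per_gpu_hr.items()
}

for name, cost in monthly.items():
    print(f"{name}: ${cost:,.0f}/month")

gap = monthly["Azure (hyperscaler)"] - monthly["RunPod (specialized GPU cloud)"]
ratio = rates_per_gpu_hr["Azure (hyperscaler)"] / rates_per_gpu_hr["RunPod (specialized GPU cloud)"]
print(f"Gap: ${gap:,.0f}/month ({ratio:.1f}x per GPU-hour)")
```

Running it prints roughly $14,300/month versus $70,800/month, a gap of about $56,500 at a 4.9x per-GPU-hour markup.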
A100 Pricing Is Approaching Commodity Levels
The A100 80GB, still a capable training and inference GPU, has fallen further:
- Specialized providers: $1.29 – $2.29/hr
- Marketplace/spot: under $1.00/hr
- Hyperscalers: $2.50 – $4.00/hr
Analysts expect A100 rates to dip below $1/hr universally by mid-2026 as more H100 and Blackwell supply comes online. Multi-year reserved contracts from 2023-2024 are expiring, returning hardware to the open market.
For a full breakdown, see our A100 pricing page or compare A100 vs H100 side by side.
Spot Instances: 40-65% Savings If You Can Tolerate Interruption
Spot and preemptible pricing cuts another 40-65% off on-demand rates. February 2026 spot data from two major providers:
| GPU | RunPod (Spot) | Vast.ai (Spot) |
|---|---|---|
| A100 PCIe 40GB | $0.60/hr | $0.52/hr |
| A100 SXM 80GB | $0.79/hr | $0.67/hr |
| L40 48GB | $0.69/hr | $0.31/hr |
Spot instances can be reclaimed with limited notice. For fault-tolerant training jobs with checkpointing, this is the most cost-effective option available. For production inference, on-demand remains the safer choice.
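If you go the spot route, the practical requirement is a training loop that can resume from a recent checkpoint whenever an instance is reclaimed. Here is a minimal sketch of that pattern in PyTorch; the toy model, checkpoint path, and save interval are illustrative stand-ins, not a prescription.

```python
import os
import torch
from torch import nn

# Minimal checkpoint-and-resume pattern for interruptible (spot) capacity.
# The toy model, checkpoint path, and save interval below are illustrative
# stand-ins; the point is the save/restore structure around the loop.
CKPT_PATH = "checkpoints/latest.pt"
SAVE_EVERY = 100      # steps between checkpoints
TOTAL_STEPS = 1_000

model = nn.Linear(512, 512)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def save_checkpoint(step: int) -> None:
    os.makedirs(os.path.dirname(CKPT_PATH), exist_ok=True)
    tmp = CKPT_PATH + ".tmp"
    torch.save({"step": step,
                "model": model.state_dict(),
                "optimizer": optimizer.state_dict()}, tmp)
    os.replace(tmp, CKPT_PATH)  # atomic rename: a mid-save preemption cannot corrupt the file

def load_checkpoint() -> int:
    if not os.path.exists(CKPT_PATH):
        return 0  # no checkpoint yet, start from step 0
    ckpt = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["step"] + 1

start = load_checkpoint()  # resumes automatically after the instance is reclaimed and restarted
for step in range(start, TOTAL_STEPS):
    x = torch.randn(32, 512)        # placeholder batch; swap in your real data loader
    loss = model(x).pow(2).mean()   # placeholder objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % SAVE_EVERY == 0:
        save_checkpoint(step)
```

In practice you would point CKPT_PATH at a persistent volume or object storage rather than local disk, since local storage on a reclaimed spot instance is usually gone by the time the replacement comes up.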
What Pushes Prices Lower From Here
Three forces are working to bring H100 prices down further in 2026:
1. Blackwell B200 Supply Ramping
As B200 instances become widely available, H100s shift from "current-gen" to "previous-gen" — the same cycle that made A100s cheap. Multiple providers are already listing B200 instances.
2. Reserved Capacity Expiring
Organizations that locked in 2-3 year H100 reservations in 2023-2024 will see those contracts expire through 2026. That hardware returns to the spot and on-demand market, increasing supply.
3. Rubin on the Horizon
NVIDIA's Rubin architecture entered full production in early 2026. Cloud instances from AWS, Google Cloud, CoreWeave, Lambda, Nebius, and Nscale are expected in H2 2026. Initial pricing is projected at $6-10+/GPU-hr, but NVIDIA claims 10x lower cost-per-token than Blackwell for MoE workloads — which will further pressure H100 economics.
The consensus: H100 on-demand pricing could reach the sub-$2/hr range universally by late 2026.
Why This Matters
If you are renting GPUs for AI workloads today, the single biggest cost lever is not which GPU you pick — it is which provider you pick. A team running 8x H100s pays $479/day on RunPod or $2,362/day on Azure. Over a month, that is a $56,000 difference for the same hardware.
The market has matured enough that specialized GPU clouds offer reliable uptime and tooling. The "safety premium" of hyperscalers is harder to justify for pure compute workloads than it was a year ago.
With Blackwell ramping and Rubin approaching, H100 prices will only go in one direction. The question is not whether to evaluate alternatives to your current provider — it is when.
Compare all H100 prices live on GPUTracker — updated every 6 hours across 50+ providers.
Sources
- GPU Cloud Pricing: H100 Costs $2.49 or $12.30 in 2026 — byteiota
- AI GPU Rental Market Trends (March 2026) — ThunderCompute
- GPU Cloud Pricing Comparison 2026 — Spheron
- NVIDIA H100 Price Guide 2026 — Jarvis Labs
- H100 Rental Prices Compared — IntuitionLabs
- NVIDIA Kicks Off Next Generation of AI With Rubin — NVIDIA Newsroom
- Cast AI: GPU Pricing Foundational Shift in 2026 — Cast AI
