
GPU Cloud Prices Dropped 40% in 12 Months — Here's What Happened

H100 prices fell from $3.50/hr to $1.87/hr in 12 months. We track 5,025 GPU instances daily. Here's what's driving the drop.

February 15, 2025 · 6 min read

GPU cloud pricing is in freefall, and if you locked into a 12-month reserved contract in early 2024, you're probably overpaying by 40% or more right now. The H100 — the GPU that defined the AI boom — was commanding $3.50+/hr on-demand just twelve months ago. Today, the cheapest on-demand H100 is $1.87/hr. The A100 80GB has collapsed even further: from $2+/hr in early 2024 to $0.34/hr today. These aren't small corrections — this is a structural repricing of the entire GPU cloud market.

What's Driving the Price Drop

Three forces are converging to push GPU cloud prices down, and none of them show signs of reversing.

1. Blackwell Is Here, and It's Killing H100 Pricing

NVIDIA's Blackwell GPUs — first the B200, with the B300 to follow — started shipping to cloud providers in late 2024. The B200 packs 180GB of HBM3e memory and roughly 2.5x the training performance of an H100. When the next generation offers that much of an improvement, the current one suddenly looks a lot less attractive. Providers that invested heavily in H100 fleets are being forced to cut prices to maintain utilization. Nobody wants to pay H100 prices when B200s are available, so the H100 is becoming last generation's hardware — still extremely capable, but no longer premium-priced.

2. The Market Doubled in Size

Two years ago, the GPU cloud market was dominated by about 10 providers: the three hyperscalers (AWS, GCP, Azure), a few established independents (Lambda, CoreWeave), and marketplace platforms like Vast.ai. Today, we track a much broader set of providers and marketplaces. New entrants like Hyperstack, CloudRift, Verda, and Crusoe all entered the market with aggressive pricing to capture share.

More competition means lower prices. It's that simple. Hyperstack launched with RTX A4000 instances at $0.15/hr on-demand — a price point that would have been unthinkable from any established provider. CloudRift entered at $0.39/hr for RTX 4090 on-demand. These newcomers don't have the legacy cost structures of the hyperscalers, so they can price aggressively and still be profitable.

3. The Spot Market Matured

Spot GPU instances used to be a niche offering from a handful of providers. Today, 2,131 spot instances are available across our tracked providers — that's 42% of the total 5,025 instances we monitor. The growing spot market creates downward pressure on on-demand pricing because customers have a credible alternative. Why pay $1.87/hr for an on-demand H100 when you can get a spot H100 for $0.73/hr? Providers are being forced to narrow the gap or lose customers to spot.
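The spot discount described above is simple arithmetic. Here is a quick sketch using the prices quoted in this section (the helper function is ours for illustration, not any provider's API):

```python
def spot_discount(on_demand: float, spot: float) -> float:
    """Percent saved by choosing a spot instance over on-demand."""
    return (on_demand - spot) / on_demand * 100

# H100 prices quoted above: $1.87/hr on-demand vs $0.73/hr spot
print(f"H100 spot discount: {spot_discount(1.87, 0.73):.0f}%")  # prints "H100 spot discount: 61%"
```

At a roughly 61% discount, interruption-tolerant workloads — batch training runs, offline inference — are the obvious fit for spot, which is exactly the alternative that pressures on-demand rates.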

The Price Trajectory: Where We've Been and Where We're Going

| GPU | Early 2024 | Feb 2025 | Change |
| --- | --- | --- | --- |
| H100 (cheapest on-demand) | $3.50+/hr | $1.87/hr | -47% |
| A100 80GB (cheapest on-demand) | $2.00+/hr | $0.34/hr | -83% |

The A100's decline is the more dramatic story. An 83% price drop in about a year is extraordinary for enterprise hardware. The A100 is now priced like commodity compute — and for many workloads, that's exactly what it has become. It's a two-generation-old GPU that is still very capable but no longer cutting-edge, and the market is pricing it accordingly.

You can track these pricing trends in real time on our trends page, which updates daily with data from the providers we track.

What's Coming Next

The B200, with its 180GB of HBM3e VRAM, is beginning to appear in cloud provider catalogs. As supply ramps through 2025, expect the H100 to drop below $1.50/hr for on-demand and potentially below $0.50/hr for spot. The A100 will continue its descent — we wouldn't be surprised to see A100 80GB instances consistently below $0.20/hr by Q4 2025.

The H200, currently at $1.84/hr, sits in an interesting position. It's newer than the H100 and has significant inference advantages (141GB VRAM, 4.8 TB/s memory bandwidth), but it's also a half-step before Blackwell. Expect H200 prices to hold steady or drop slightly through 2025, then fall sharply in 2026 once B200 supply is fully ramped.

Our advice: do not sign 1-year reserved contracts right now. Prices are falling too fast. A 1-year commitment at today's prices will look terrible in 6 months when the same GPU is 20–30% cheaper. Use on-demand or short-term spot instead, and re-evaluate quarterly. The only exception is if you're getting a truly exceptional reserved discount (60%+ off current on-demand) with a provider you trust.
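A rough way to sanity-check that advice is to model a year of on-demand spend against a fixed reserved rate. The monthly decline rate, utilization, and discount below are illustrative assumptions, not tracked data:

```python
def on_demand_cost(start_price: float, monthly_decline: float,
                   utilization: float, hours_per_month: float = 730) -> float:
    """12-month on-demand spend if the hourly price decays each month
    and you only pay for the hours you actually use."""
    total, price = 0.0, start_price
    for _ in range(12):
        total += price * hours_per_month * utilization
        price *= 1 - monthly_decline
    return total

def reserved_cost(start_price: float, discount: float,
                  hours_per_month: float = 730) -> float:
    """12-month reserved spend: a fixed discounted rate, paid 24/7."""
    return start_price * (1 - discount) * hours_per_month * 12

# H100 at today's $1.87/hr, assuming ~5%/month price decline and 60% utilization,
# versus a 30%-off 1-year reservation — on-demand comes out well below reserved
print(f"on-demand: ${on_demand_cost(1.87, 0.05, 0.60):,.0f}")
print(f"reserved:  ${reserved_cost(1.87, 0.30):,.0f}")
```

Note that at true 24/7 utilization a deep reserved discount can still win, which is why the 60%+ exception exists; the more idle time you have and the faster prices fall, the worse a lock-in looks.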

What This Means for You

If you're an AI startup, an ML researcher, or an engineer running GPU workloads, the falling prices are unambiguously great news. The cost of training and serving models is dropping faster than the cost of the models themselves is growing. A model that cost $100,000 to train in 2024 might cost $30,000 today on the same hardware class, simply because the per-hour GPU price has collapsed.

This also means the barrier to entry for AI continues to fall. You no longer need a six-figure cloud budget to train serious models. An A100 80GB at $0.34/hr means you can fine-tune a 30B model for under $20. An H100 spot instance at $0.73/hr means pre-training experiments that used to cost thousands now cost hundreds.

Stay flexible. Don't lock in. And keep checking our comparison tool — the prices you see today won't be the prices you see next month.
