
GPU Cloud Pricing Statistics 2026: Data From 4,969 Instances Across 18 Providers

Every GPU cloud pricing stat that matters in 2026: market medians, spot vs on-demand savings, provider price ranges, and per-GPU model breakdowns — sourced from 4,969 live instances updated every 6 hours.

March 19, 2026 · 12 min read

GPU cloud pricing is chaotic. Prices differ by 10x across providers for the same GPU, spot markets can collapse overnight, and hyperscalers charge multiples of what specialized providers do. To cut through the noise, we pulled live data from 4,969 GPU instances across 18 providers — updated every 6 hours — and compiled every stat that actually matters for someone renting compute in 2026.

All numbers below reflect current market rates as of March 2026. Use the live comparison tool to see exact prices in real time.

Market Overview: Key Numbers

  • 4,969 total instances tracked
  • 18 cloud providers
  • 65+ GPU models
  • 62% average spot savings vs on-demand

Spot vs On-Demand: The 62% Gap

Of the 4,969 instances tracked, 2,095 are spot instances (42%) and 2,874 are on-demand (58%). The average spot price is $2.83/hr vs $7.39/hr for on-demand, a 62% savings. That discount rate is comparable to typical AWS or GCP spot discounts (60–90% below on-demand for EC2), but hyperscaler base prices are already higher, so the absolute spot prices across this market come out far lower.

Spot instances on GPU clouds like Vast.ai and RunPod are interruptible — the host can reclaim the GPU with limited notice. For fault-tolerant training jobs with checkpointing, this is the most cost-effective option in the market. For production inference, stick to on-demand.
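Checkpointing is what makes spot viable for training. Below is a minimal sketch of a checkpoint-and-resume loop; the `save_state`/`load_state` helpers and the file-based checkpoint are illustrative stand-ins for whatever your framework provides (e.g. saving optimizer and model state), not a specific provider's API.

```python
import os
import pickle

CKPT = "train_state.pkl"

def load_state(path=CKPT):
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "loss": None}

def save_state(state, path=CKPT):
    """Write the checkpoint atomically so a reclaim mid-write can't corrupt it."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def train(total_steps=100, ckpt_every=10):
    state = load_state()
    while state["step"] < total_steps:
        state["step"] += 1            # stand-in for one optimizer step
        state["loss"] = 1.0 / state["step"]
        if state["step"] % ckpt_every == 0:
            save_state(state)         # costs seconds, saves hours on reclaim
    return state
```

If the host reclaims the GPU, relaunching the same script resumes from the last saved step, so the only lost work is whatever happened since the previous checkpoint.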

| Type | Instances | Avg Price | Best Use Case |
|---|---|---|---|
| Spot / Interruptible | 2,095 | $2.83/hr | Training, batch jobs, experimentation |
| On-Demand / Reserved | 2,874 | $7.39/hr | Production inference, SLA-required workloads |
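The 62% figure follows directly from the two averages above; a quick sanity check:

```python
spot_avg = 2.83       # $/hr, average across 2,095 spot instances
on_demand_avg = 7.39  # $/hr, average across 2,874 on-demand instances

savings = 1 - spot_avg / on_demand_avg
print(f"{savings:.0%}")  # → 62%
```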

Price by GPU Model: Complete Market Data

The table below shows single-GPU pricing only (multi-GPU instances excluded to avoid distorting medians). Prices are current as of March 2026.

| GPU | VRAM | Listings | Spot From | On-Demand From | Median |
|---|---|---|---|---|---|
| NVIDIA B200 | 180GB | 23 | $1.67/hr | $3.60/hr | $4.99/hr |
| NVIDIA H200 | 141GB | 46 | $0.33/hr | $1.99/hr | $2.29/hr |
| NVIDIA H100 | 80GB | 70 | $0.80/hr | $1.58/hr | $2.59/hr |
| NVIDIA A100 | 40–80GB | 103 | $0.08/hr | $0.09/hr | $1.29/hr |
| NVIDIA L40S | 48GB | 81 | $0.26/hr | $0.86/hr | $1.86/hr |
| NVIDIA RTX 5090 | 32GB | 34 | $0.13/hr | $0.33/hr | $0.65/hr |
| NVIDIA RTX 4090 | 24GB | 34 | $0.17/hr | $0.34/hr | $0.34/hr |
| NVIDIA RTX 3090 | 24GB | 22 | $0.05/hr | $0.22/hr | $0.22/hr |
| NVIDIA A10 | 24GB | 372 | $0.08/hr | $0.09/hr | $1.20/hr |
| NVIDIA T4 | 16GB | 543 | $0.07/hr | $0.34/hr | $0.54/hr |
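One useful way to read this table is spot dollars per GB of VRAM. The sketch below ranks a few GPUs from the table on that metric (the A100 is scored at the 80GB top of its range, which is an assumption of this example):

```python
# (vram_gb, spot_from_usd_per_hr) taken from the table above
gpus = {
    "H200": (141, 0.33),
    "H100": (80, 0.80),
    "A100": (80, 0.08),   # assumes the 80GB variant
    "L40S": (48, 0.26),
    "RTX 4090": (24, 0.17),
    "T4": (16, 0.07),
}

# rank by spot dollars per GB of VRAM per hour (lower = more VRAM per dollar)
ranked = sorted(gpus, key=lambda g: gpus[g][1] / gpus[g][0])
for name in ranked:
    vram, price = gpus[name]
    print(f"{name:9s} ${price / vram:.4f} per GB-hr")
```

On these numbers the A100 and H200 come out far ahead of the H100, which lines up with the takeaways at the end of this article.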

Provider Landscape: 18 Providers, Wildly Different Prices

The 18 providers we track range from hyperscalers (AWS, GCP, Azure) to specialized GPU clouds (Vast.ai, RunPod, Lambda Labs) to newer entrants (Verda, CloudRift). The price gap between the cheapest and most expensive provider for the same GPU routinely exceeds 10x.

| Provider | Instances | Cheapest GPU | Category |
|---|---|---|---|
| Vast.ai | 128 | $0.01/hr | Marketplace |
| Verda | 86 | $0.05/hr | Specialized |
| RunPod | 881 | $0.07/hr | Specialized |
| Azure | 448 | $0.07/hr | Hyperscaler |
| Vultr | 115 | $0.06/hr | Cloud |
| GCP | 2026 | $0.12/hr | Hyperscaler |
| Hyperstack | 9 | $0.15/hr | Specialized |
| Cudo Compute | 6 | $0.24/hr | Specialized |
| Crusoe | 11 | $0.40/hr | Specialized |
| CloudRift | 43 | $0.39/hr | Specialized |
| Lambda Labs | 207 | $0.50/hr | Specialized |
| OCI | 320 | $0.64/hr | Hyperscaler |
| AWS | 619 | $0.10/hr | Hyperscaler |
| Nebius | 32 | $1.25/hr | Cloud |
| CoreWeave | 8 | $6.50/hr | Specialized |

The Hyperscaler Tax: How Much More AWS/GCP/Azure Charge

The H100 is the clearest example of hyperscaler pricing vs. the rest of the market. AWS's cheapest single H100 is $1.16/hr, but only via competitive spot pricing; Verda offers H100s from $0.80/hr. On-demand, AWS's H100-based p5 instances sit above $4/hr per GPU, while Lambda Labs charges $2.49/hr and Scaleway $3.22/hr. The spread is massive.

This pattern holds across GPU models. The hyperscaler premium is primarily a function of:

  • Enterprise SLAs: Uptime guarantees, compliance certifications (SOC 2, HIPAA, PCI), and dedicated support cost money.
  • Ecosystem lock-in pricing: AWS/GCP/Azure price GPU instances knowing you're already paying for their storage, networking, and managed services.
  • Market inertia: Enterprise procurement teams default to known vendors. The alternative providers compete on price to win new customers.
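The H100 figures quoted above make the premium concrete. A quick comparison, using only the prices cited in this section (the $4.00 AWS on-demand entry is a floor, since the article says "above $4/hr"):

```python
h100 = {  # $/hr figures quoted in this section
    "Verda (spot)": 0.80,
    "AWS (spot)": 1.16,
    "Lambda Labs (on-demand)": 2.49,
    "Scaleway (on-demand)": 3.22,
    "AWS (on-demand p5)": 4.00,  # stated only as "above $4/hr"; 4.00 is a floor
}

cheapest = min(h100.values())
for name, price in sorted(h100.items(), key=lambda kv: kv[1]):
    print(f"{name:24s} ${price:.2f}/hr  ({price / cheapest:.1f}x the floor)")
```

Even with the conservative $4.00 floor, AWS on-demand lands at 5x the cheapest spot H100 in the dataset.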

Price Ranges by Workload Type

Different workloads have different GPU requirements, which translates to very different price ranges. Here's the practical breakdown:

| Workload | Recommended GPU | Price Range |
|---|---|---|
| 7B model inference (quantized) | RTX 3090 / T4 | $0.05–$0.34/hr |
| 7B–13B model inference (fp16) | RTX 4090 / RTX 5090 | $0.13–$0.59/hr |
| 13B–34B model inference / fine-tuning | L40S / A100 40GB | $0.26–$2.38/hr |
| 70B model inference / fine-tuning | A100 80GB / H100 | $0.08–$2.99/hr |
| Large-scale pre-training | H100 / H200 / B200 | $0.80–$5.29/hr |
| Stable Diffusion / image generation | RTX 3090 / RTX 4090 | $0.05–$0.59/hr |
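Turning these hourly rates into a job budget is simple multiplication. A sketch for a hypothetical 24-hour, 8-GPU 70B fine-tune (the run length and GPU count are assumptions of this example, not data from the tracker):

```python
def job_cost(hourly_rate, hours, n_gpus=1):
    """Total cost of a run: rate x wall-clock hours x GPU count."""
    return hourly_rate * hours * n_gpus

# Hypothetical 24-hour 70B fine-tune on 8 GPUs, spot floor vs top of range
spot = job_cost(0.80, hours=24, n_gpus=8)       # H100 spot floor from the table
on_demand = job_cost(2.99, hours=24, n_gpus=8)  # top of the 70B price range
print(f"spot ${spot:,.0f} vs on-demand ${on_demand:,.0f}")
```

The same run costs roughly $154 at the spot floor versus roughly $574 at the top of the on-demand range, before accounting for any time lost to spot reclaims.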

Key Takeaways

  • Spot instances save 62% on average vs on-demand across the full market. For training jobs with checkpointing, spot is almost always the right choice.
  • The RTX 4090 is one of the most cost-effective GPUs in the market at $0.17–$0.34/hr per GPU, handling 7B–13B inference at a fraction of A100 pricing.
  • The A100 remains the value king for training. At $0.08/hr spot, it offers more VRAM-per-dollar than almost anything on the market for training workloads that fit in 80GB.
  • H200 spot ($0.33/hr) is dramatically underpriced given its 141GB VRAM and 4.8 TB/s bandwidth. Check availability before defaulting to H100.
  • Hyperscalers charge 3–10x more than specialized providers for the same GPU. The premium is justified only if you need enterprise SLAs or deep integration with their managed services.

All data is sourced from GPU Tracker's live price feed, updated every 6 hours. See current prices across all providers and GPU models at gputracker.dev.


