
Live prices from 54 providers

Cheapest Cloud GPU

$0.0133 per hour — cheapest available right now

5,326 instances compared across 54 providers, 77 GPU models, and 136 regions.

Every available instance sorted by price

#   GPU          Architecture  Provider  Price       VRAM   Type
1   V100         Older         Vast.ai   $0.0133/hr  16 GB  Spot
2   V100         Older         Vast.ai   $0.0133/hr  16 GB  Spot
3   RTX 3080 Ti  Ampere        Vast.ai   $0.0267/hr  12 GB  Spot
4   V100         Older         Vast.ai   $0.0282/hr  16 GB  On-Demand
5   V100         Older         Vast.ai   $0.0300/hr  16 GB  Spot
6   V100         Older         Vast.ai   $0.0305/hr  16 GB  On-Demand
7   RTX 5060 Ti  Blackwell     Vast.ai   $0.0360/hr  16 GB  Spot
8   V100         Older         Verda     $0.0483/hr  16 GB  Spot
9   RTX 5060 Ti  Blackwell     Vast.ai   $0.0497/hr  16 GB  On-Demand
10  V100         Older         Vast.ai   $0.0533/hr  16 GB  Spot
11  RTX 3090     Ampere        Vast.ai   $0.0533/hr  24 GB  Spot
12  A16          Older         Vultr     $0.0590/hr  2 GB   On-Demand
13  A16          Older         Vultr     $0.0590/hr  2 GB   On-Demand
14  A16          Older         Vultr     $0.0590/hr  2 GB   On-Demand
15  A16          Older         Vultr     $0.0590/hr  2 GB   On-Demand
16  A16          Older         Vultr     $0.0590/hr  2 GB   On-Demand
17  A16          Older         Vultr     $0.0590/hr  2 GB   On-Demand
18  A16          Older         Vultr     $0.0590/hr  2 GB   On-Demand
19  A16          Older         Vultr     $0.0590/hr  2 GB   On-Demand
20  A16          Older         Vultr     $0.0590/hr  2 GB   On-Demand

20 of 5,326 available instances · Updated continuously

Cheapest by Performance Tier

Best price for each GPU class — live from 54 providers

High Performance (H100 · H200 · A100 · MI300X)
  Cheapest: A100 on Vast.ai at $0.0800/hr · 40 GB · 938 instances available

Mid-Range (L40S · A40 · L4 · A10G)
  Cheapest: A40 on Vultr at $0.0750/hr · 2 GB · 831 instances available

Budget (T4 · RTX 4090 · RTX 3090 · RTX 3080)
  Cheapest: T4 on Azure at $0.0684/hr · 16 GB · 1,557 instances available

How to Find the Cheapest Cloud GPU in 2026

Cloud GPU pricing varies dramatically — the same NVIDIA H100 can cost anywhere from $1.50/hr to $4.50/hr depending on provider, region, and commitment type.

5 strategies to cut GPU costs

  1. Use spot instances for batch work. 50–80% below on-demand on most providers.
  2. Right-size your GPU. An A100 80GB is overkill for a 7B model. Match VRAM to your workload.
  3. Compare across providers. Hyperscalers charge 2–3× more than GPU-native clouds for the same hardware.
  4. Use reserved pricing. 30–60% off with 1-year or 3-year commitments.
  5. Factor in egress. Hyperscalers charge $0.08–$0.12/GB for data transfer out.
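Most of these strategies reduce to simple arithmetic on a few rates. A minimal sketch of an all-in cost comparison; the hourly rates, discount, and egress fee below are illustrative assumptions, not live quotes:

```python
def total_cost(hourly_rate, hours, egress_gb=0.0, egress_per_gb=0.0,
               spot_discount=0.0):
    """Estimated total cost: compute time plus data-transfer-out fees."""
    compute = hourly_rate * (1.0 - spot_discount) * hours
    egress = egress_gb * egress_per_gb
    return compute + egress

# Illustrative: hyperscaler on-demand vs a GPU-native cloud on spot,
# 100 GPU-hours of batch work with 500 GB of results transferred out.
hyperscaler = total_cost(4.50, hours=100, egress_gb=500, egress_per_gb=0.09)
gpu_native = total_cost(1.50, hours=100, spot_discount=0.5)
print(f"hyperscaler on-demand: ${hyperscaler:.2f}")
print(f"gpu-native spot:       ${gpu_native:.2f}")
```

Plugging in the H100 spread quoted above ($1.50 vs $4.50/hr), the spot discount and free egress compound: the cheap path here comes out roughly 6–7× cheaper for the same job.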

Self-hosting vs API

Self-hosted GPUs are 5–50× cheaper per token than API providers. A single H100 running Llama 3 70B costs ~$0.05/1M tokens, versus $2.50/1M for GPT-4o. Use our LLM Cost Calculator to compare for your workload.
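The per-token figure follows directly from hourly rate and sustained throughput. A hedged sketch of that conversion; the $2.00/hr rate and the ~11,000 tok/s aggregate throughput (a heavily batched inference server) are assumptions for illustration, not benchmarks:

```python
def cost_per_million_tokens(hourly_rate_usd, tokens_per_second):
    """Self-hosted $/1M tokens at a given sustained aggregate throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Assumed: one H100 at $2.00/hr serving ~11,000 tok/s with heavy batching.
self_hosted = cost_per_million_tokens(2.00, 11_000)
api_price = 2.50  # $/1M tokens, the API figure cited above
print(f"self-hosted: ${self_hosted:.3f}/1M tokens")  # ≈ $0.051
print(f"API is roughly {api_price / self_hosted:.0f}x more per token")
```

Note the result is dominated by throughput: at low utilization (no batching, idle hours) the same GPU can easily cost more per token than an API, which is why the comparison only favors self-hosting for sustained workloads.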

LLM Cost Calculator: self-hosted GPU cost vs API pricing per million tokens
Buy vs Rent Calculator: should you buy hardware or rent cloud GPUs?
GPU Pricing 2026: full market overview with provider breakdowns


Get notified when prices drop

Set a price threshold for any GPU model and we'll email you when it's available.

Set up price alerts

Free — no signup required
