
Cerebrium GPU Pricing

4 GPU instances across 1 region. 4 GPU models available, from $0.52/hr.

Inference · Per-second billing

Serverless ML infrastructure platform offering per-second GPU billing with a Python-first API. Deploy models in minutes without managing infrastructure.
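As a rough illustration of what per-second billing means in practice, the sketch below estimates the cost of a short inference job. The $2.9520/hr H100 rate is taken from the table on this page; rounding runtime up to whole billed seconds is an assumption, so check Cerebrium's docs for the exact billing granularity.

```python
import math

# Rough cost sketch for per-second GPU billing. The H100 rate comes from the
# pricing table on this page; rounding runtime up to whole seconds is an
# assumption, not a confirmed Cerebrium billing rule.
H100_HOURLY_RATE = 2.9520  # $/hr for a 1x H100

def per_second_cost(runtime_seconds: float, hourly_rate: float = H100_HOURLY_RATE) -> float:
    """Dollar cost of a job billed per second at the given hourly rate."""
    billed_seconds = math.ceil(runtime_seconds)
    return billed_seconds * hourly_rate / 3600

# A 45-second inference on a 1x H100 costs about $0.037,
# versus $2.952 for a full hourly-billed hour.
print(f"${per_second_cost(45):.4f}")
```

The same arithmetic explains why per-second billing pairs well with the "stop to pause billing" feature noted below: idle seconds are simply never billed.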

Strengths
  • Per-second billing
  • Python-first API
  • Fast deployment
  • No infrastructure management
Considerations
  • Serverless limitations
  • Cold start latency
Billing notes
  • Stop to pause billing
  • Free egress
Quick Answer (updated Apr 12, 7:00 AM) · Methodology

Cerebrium currently lists 4 GPU instances across 4 GPU models and 1 region. Pricing starts at $0.52/hr, while the median listing price is $2.95/hr. Compare by model, commitment type, and region before treating the cheapest row as the best choice.

Starting at: $0.52/hr (cheapest instance)
Median: $2.95/hr
GPU models: 4 available
Instances: 4 total
Regions: 1 covered
Spot share: 0%

All Cerebrium GPU Instances

4 results
| GPU Model | Instance | Count | VRAM | Region | Type | Price/hr | $/GPU/hr |
|---|---|---|---|---|---|---|---|
| RTX 4090 | 1x-RTX-4090-cerebrium | 1 | 24GB | US-East | Serverless | $0.5220 | $0.5220 |
| A100 80GB | 1x-A100-80GB-cerebrium | 1 | 80GB | US-East | Serverless | $2.1384 | $2.1384 |
| H100 | 1x-H100-cerebrium | 1 | 80GB | US-East | Serverless | $2.9520 | $2.9520 |
| H100 SXM | 8x-H100-SXM-cerebrium | 8 | 640GB | US-East | Serverless | $23.6160 | $2.9520 |
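The summary figures quoted on this page can be reproduced from the four listings. A minimal sketch follows; note that the quoted $2.95 median matches the upper median of the four prices (the conventional interpolated median would be $2.55), so the site's exact median convention is an assumption here.

```python
import statistics

# (model, GPU count, instance price in $/hr) per row of the table above.
listings = [
    ("RTX 4090",  1,  0.5220),
    ("A100 80GB", 1,  2.1384),
    ("H100",      1,  2.9520),
    ("H100 SXM",  8, 23.6160),
]

prices = [price for _, _, price in listings]
per_gpu = [price / count for _, count, price in listings]

print(f"min     ${min(prices):.2f}/hr")                      # cheapest listing
print(f"median  ${statistics.median_high(prices):.2f}/hr")   # upper median
print(f"mean    ${statistics.mean(prices):.2f}/hr")          # FAQ's average
print(f"8x H100 SXM per GPU: ${per_gpu[-1]:.4f}/hr")         # 23.6160 / 8
```

The per-GPU normalization in the last line is why the 8x H100 SXM row shows the same $/GPU/hr as the single H100: $23.6160 / 8 = $2.9520.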

Cerebrium GPU Cloud — FAQ

How much does Cerebrium charge for GPUs?

Cerebrium GPU instances start from $0.52/hr. The average price is $7.31/hr. Prices depend on GPU model, region, and commitment type (on-demand vs spot).

What GPU models does Cerebrium offer?

Cerebrium offers 4 GPU models: RTX 4090, A100 80GB, H100, H100 SXM. Browse the full list above to compare prices per model.

Where can I see billing assumptions and risk methodology?

GPU Tracker’s pricing comparisons are paired with true cost and risk signals. Read the methodology page for how refresh cadence, cost assumptions, and reliability indicators are defined.
