
Accelerate your AI projects with affordable GPU cloud rentals starting at $0.16/hour. Access powerful developer tools for AI development and for training AI models.
22.3K monthly visits (Similarweb)

Pricing Plans
On-demand GPU instances in Community Cloud:

| GPU | VRAM | RAM | vCPUs | Price |
| --- | --- | --- | --- | --- |
| B200 | 180 GB | 283 GB | 28 | $5.98/hr |
| H200 | 141 GB | 276 GB | 24 | $3.59/hr |
| RTX Pro 6000 | 96 GB | 188 GB | 16 | $1.69/hr |
| H100 PCIe | 80 GB | 188 GB | 16 | $1.99/hr |
| H100 SXM | 80 GB | 125 GB | 20 | $2.33/hr |
| A100 PCIe | 80 GB | 117 GB | 8 | $1.19/hr |
| A100 SXM | 80 GB | 125 GB | 16 | $1.39/hr |
| L40S | 48 GB | 94 GB | 16 | $0.79/hr |
| RTX 6000 Ada | 48 GB | 167 GB | 10 | $0.74/hr |
| A40 | 48 GB | 50 GB | 9 | $0.35/hr |
| L40 | 48 GB | 94 GB | 8 | $0.69/hr |
| RTX A6000 | 48 GB | 50 GB | 9 | $0.33/hr |
| RTX 5090 | 32 GB | 35 GB | 9 | $0.69/hr |
| L4 | 24 GB | 50 GB | 12 | $0.44/hr |
| RTX 3090 | 24 GB | 125 GB | 16 | $0.22/hr |
| RTX 4090 | 24 GB | 41 GB | 6 | $0.34/hr |
| RTX A5000 | 24 GB | 25 GB | 9 | $0.16/hr |
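The on-demand rates above translate directly into a simple hourly bill. A minimal sketch of estimating a pod's compute cost, using only prices from the table (the chosen GPU and runtime are illustrative):

```python
# On-demand Community Cloud hourly rates, taken from the pricing table above.
ON_DEMAND_PER_HOUR = {
    "H200": 3.59,
    "H100 PCIe": 1.99,
    "A100 PCIe": 1.19,
    "L40S": 0.79,
    "RTX 4090": 0.34,
    "RTX A5000": 0.16,
}

def pod_cost(gpu: str, hours: float) -> float:
    """Return the on-demand compute cost in USD for `hours` of runtime."""
    return round(ON_DEMAND_PER_HOUR[gpu] * hours, 2)

# A 24-hour fine-tuning run on a single L40S:
print(pod_cost("L40S", 24))  # 0.79 * 24 = 18.96
```

Storage and network egress are billed separately from the per-hour GPU rate.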
Serverless Flex Workers scale up during traffic spikes and return to idle after completing jobs; billing is per second:

| Worker tier | VRAM | Notes |
| --- | --- | --- |
| B200 | 180 GB | |
| H200 | 141 GB | |
| H100 PRO | 80 GB | |
| A100 | 80 GB | |
| L40, L40S, 6000 Ada PRO | 48 GB | High-throughput GPUs, yet still very cost-effective |
| A6000, A40 | 48 GB | Extreme inference throughput on LLMs like Llama 3 8B |
| RTX 5090 PRO | 32 GB | A cost-effective option for running big models |
| RTX 4090 PRO | 24 GB | Extreme throughput for small-to-medium models |
| L4, A5000, 3090 | 24 GB | Extreme throughput for small-to-medium models |
| A4000, A4500, RTX 4000 | 16 GB | Great for small-to-medium inference workloads |
Always-on Active Workers eliminate cold starts; they are billed continuously per second at up to a 30% discount:

| Worker tier | VRAM | Notes |
| --- | --- | --- |
| B200 | 180 GB | |
| H200 | 141 GB | |
| H100 PRO | 80 GB | |
| A100 | 80 GB | |
| L40, L40S, 6000 Ada PRO | 48 GB | High-throughput GPUs, yet still very cost-effective |
| A6000, A40 | 48 GB | Extreme inference throughput on LLMs like Llama 3 8B |
| RTX 5090 PRO | 32 GB | A cost-effective option for running big models |
| RTX 4090 PRO | 24 GB | Extreme throughput for small-to-medium models |
| L4, A5000, 3090 | 24 GB | Extreme throughput for small-to-medium models |
| A4000, A4500, RTX 4000 | 16 GB | Great for small-to-medium inference workloads |
Storage pricing:

| Storage type | Rate (per GB per month) |
| --- | --- |
| Volume storage, running Pods (persistent) | $0.10 |
| Volume storage, stopped Pods (persistent) | $0.20 |
| Container disk, running Pods (temporary) | $0.10 |
| Network volume, under 1 TB (persistent) | $0.07 |
| Network volume, over 1 TB (persistent) | $0.05 |
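Since storage is billed per GB-month, a pod's total monthly cost combines the hourly GPU rate with these storage rates. A small sketch of the network-volume cost, using the rates above; note the listing does not say whether the over-1 TB rate applies to the whole volume or only the portion above 1 TB, so this assumes the whole-volume interpretation:

```python
# Per-GB-month storage rates, taken from the storage table above.
STORAGE_PER_GB_MONTH = {
    "network_under_1tb": 0.07,
    "network_over_1tb": 0.05,
}

def network_volume_monthly(gb: int) -> float:
    """Monthly cost of a network volume; volumes over 1 TB (1000 GB assumed)
    get the cheaper rate, applied here to the whole volume."""
    key = "network_over_1tb" if gb > 1000 else "network_under_1tb"
    return round(gb * STORAGE_PER_GB_MONTH[key], 2)

print(network_volume_monthly(500))   # 500 * 0.07 = 35.0
print(network_volume_monthly(2000))  # 2000 * 0.05 = 100.0
```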
Savings Plans (long-term commitments, compared to on-demand pricing):

| Commitment | Savings |
| --- | --- |
| 3 months | up to 15% |
| 12 months | up to 25% |
| 24 months | up to 40% |
Example rates with a 24-month commitment (up to 40% savings):

| GPU | Price |
| --- | --- |
| H200 | $2.39/hr |
| H100 | $1.31/hr |
| A100 | $0.98/hr |
| L40S | $0.52/hr |
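The committed rates can be sanity-checked against the on-demand table. The listing does not say which on-demand SKU (PCIe vs SXM) the "up to 40%" figure is measured against, so this sketch uses the Community Cloud rates listed earlier:

```python
# Compare 24-month committed rates against on-demand Community Cloud rates
# from the tables above (SKU pairing is an assumption, not stated in the listing).
on_demand = {"H200": 3.59, "L40S": 0.79}   # on-demand $/hr
committed = {"H200": 2.39, "L40S": 0.52}   # 24-month commitment $/hr

for gpu in on_demand:
    pct = (1 - committed[gpu] / on_demand[gpu]) * 100
    print(f"{gpu}: {pct:.1f}% below on-demand")
# H200: 33.4% below on-demand
# L40S: 34.2% below on-demand
```

Against these particular SKUs the discount lands in the low-to-mid 30% range; the 40% ceiling presumably applies to other GPU and commitment combinations.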