Partner Directory

Cloud GPU Providers

Below is a directory of cloud GPU providers and example configurations they offer. For each provider, we give a brief overview and a few example instance types with their specifications and hourly pricing. Use these tables to compare offerings across platforms. (Prices are in USD per hour. Configurations and prices are examples and may vary by region; please refer to the provider’s site for the most current details.)
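To make comparisons concrete, here is a minimal Python sketch that turns an hourly rate into a total job cost and an effective per-GPU-hour rate. The provider names and rates are illustrative figures taken from the tables below; always substitute current pricing from the provider.

```python
# Minimal cost-comparison sketch. Rates are illustrative examples taken from
# the tables below; always check the provider's current pricing.

def job_cost(price_per_hour: float, hours: float, gpu_count: int = 1):
    """Return (total cost, effective cost per GPU-hour) for a job."""
    total = price_per_hour * hours
    per_gpu_hour = price_per_hour / gpu_count
    return total, per_gpu_hour

# Example: 100 hours of training on a few H100 offerings listed below.
offerings = {
    "Lambda Labs 1x H100 PCIe": (2.49, 1),
    "CoreWeave 1x H100 SXM": (2.65, 1),
    "Lambda Labs 8x H100 SXM": (23.92, 8),
}

for name, (rate, gpus) in offerings.items():
    total, per_gpu = job_cost(rate, hours=100, gpu_count=gpus)
    print(f"{name}: ${total:,.2f} total, ${per_gpu:.2f} per GPU-hour")
```
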
Premier Partner
Vultr – A USA-based global cloud provider offering GPU instances (A16, A40, L40S, GH200) with both single and multi-GPU configurations across global regions.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| A16 | 1x A16 | 16 | 6 | 64 | $0.47 |
| A40 | 1x A40 | 48 | 24 | 120 | $1.71 |
| L40S | 1x L40S | 48 | 16 | 180 | $1.67 |
| B200 | 8x B200 | 1536 | 248 | 2826 | $2.89 |
| AMD MI325X | 8x AMD MI325X | 2048 | 248 | 2872 | $4.61 |

Alibaba Cloud – A cloud platform with a strong presence in Asia (headquartered in Singapore). It offers a range of GPU instances on its global infrastructure, suitable for AI, rendering, and HPC workloads, with competitive pricing especially in Asian regions.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| ecs.gn6i-c4g1.xlarge | 1x T4 | 16 | 4 | 15 | $1.18 |
| ecs.gn7i-c8g1.2xlarge | 1x A10 | 24 | 8 | 30 | $2.86 |
| ecs.gn6e-c12g1.3xlarge | 1x V100 | 32 | 12 | 92 | $3.51 |

Amazon Web Services (AWS) – The largest global cloud provider (USA). AWS offers a broad array of GPU instances integrated into its cloud ecosystem, ranging from cost-effective instances for light workloads to multi-GPU clusters for heavy AI training. AWS’s GPU instances (the G and P series) benefit from the robust AWS infrastructure and services.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| g5g.xlarge | 1x T4G | 2 TiB | 4 | 16 | $0.42 |
| p2.xlarge | 1x K80 | 12 | 4 | 61 | $0.90 |
| p3.2xlarge | 1x V100 | 16 | 8 | 61 | $3.06 |
| p5.48xlarge | 8x H100 | 8x 3840 GB SSD | 192 | 2048 | $55.04 |
| p6.b200.48xlarge | 8x B200 | 8x 3.84 TB NVMe | 192 | 2048 | $113.93 |

Build AI – A USA-based startup focused on affordable GPUs for machine learning training. Build AI provides both on-demand and spot-priced GPU servers to help cut costs, offering modern NVIDIA A100 and H100 GPUs. The platform is well suited to training deep learning models on a budget, with lower prices for interruptible (spot) instances (see the quick savings calculation after the table).

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| A100 (spot) | 1x A100 | 40 | 15 | 200 | $1.05 |
| A100 | 1x A100 | 40 | 15 | 200 | $1.42 |
| A100 (spot) | 1x A100 | 80 | 30 | 225 | $1.45 |
| A100 | 1x A100 | 80 | 30 | 225 | $1.97 |
| H100 (spot) | 1x H100 | 80 | 26 | 225 | $2.79 |
| H100 | 1x H100 | 80 | 26 | 225 | $3.85 |

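Since Build AI lists both spot and on-demand rates for the same configurations, the relative saving can be read straight off the table. A quick sketch using the example rates above (illustrative only; interruptible capacity can be reclaimed at any time):

```python
# Spot vs. on-demand saving, using Build AI's example rates from the table above.
pairs = {
    "A100 40GB": (1.05, 1.42),  # (spot, on-demand) in USD per hour
    "H100 80GB": (2.79, 3.85),
}

for gpu, (spot, on_demand) in pairs.items():
    saving = 1 - spot / on_demand
    print(f"{gpu}: spot is about {saving:.0%} cheaper than on-demand")
```
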
Civo – A UK-based cloud platform known for simplicity and Kubernetes services (K3s). Civo has introduced GPU instances for AI and machine learning workloads. It provides configurations from single GPU VMs to multi-GPU setups, with A100 and L40S GPUs available. Civo’s GPU plans are straightforward, making it easy to deploy and scale GPU workloads in the cloud.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| A100-40 Small | 1x A100 | 40 | 8 | 64 | $1.79 |
| L40S Medium | 1x L40S | 48 | 8 | 64 | $1.79 |
| A100-40 Medium (2x) | 2x A100 | 80 | 16 | 128 | $3.57 |
| A100-40 Large (4x) | 4x A100 | 160 | 32 | 255 | $7.14 |
| A100-40 XL (8x) | 8x A100 | 320 | 64 | 512 | $14.29 |

Contabo – A Germany-based provider known for low-cost servers. Contabo offers a couple of high-end GPU configurations on dedicated hardware. These instances pack multiple GPUs (L40S or H100) and provide bare-metal performance at competitive hourly rates (note: Contabo often charges a one-time setup fee for dedicated resources; see the amortization sketch after the table).

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| L40S | 4x L40S | 192 | 64 | 512 | $4.51 |
| H100 | 4x H100 | 320 | 64 | 512 | $11.04 |

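Because Contabo (and Hetzner, further down) charge a one-time setup fee on dedicated GPU servers, the effective hourly rate depends on how long the server is kept. A minimal amortization sketch follows; the setup fee used here is a hypothetical placeholder, not a quoted price:

```python
# Effective hourly rate once a one-time setup fee is spread over the rental period.
# The setup fee is a hypothetical placeholder; check the provider for the real figure.

def effective_hourly(base_rate: float, setup_fee: float, hours_kept: float) -> float:
    return base_rate + setup_fee / hours_kept

base_rate = 4.51   # Contabo 4x L40S example rate from the table above (USD/hour)
setup_fee = 100.0  # hypothetical one-time setup fee (USD)

for months in (1, 3, 12):
    hours = months * 730  # approximate hours per month
    rate = effective_hourly(base_rate, setup_fee, hours)
    print(f"kept {months:>2} month(s): ${rate:.2f}/hour effective")
```
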
CoreWeave – A USA-based GPU cloud built specifically for AI workloads. CoreWeave offers a massive fleet of NVIDIA GPUs with preemptible pricing, low-latency networking, and rapid provisioning. It’s a popular choice among AI startups and enterprise labs running large-scale training or inference jobs.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| L40S | 1x L40S | 48 | 8 | 64 | $1.29 |
| A100 80GB | 1x A100 | 80 | 20 | 160 | $1.49 |
| H100 SXM | 1x H100 | 80 | 32 | 240 | $2.65 |

Crusoe – A USA-based cloud provider focused on sustainable computing (it powers its cloud with stranded energy such as flare gas). Crusoe offers GPU instances for AI and HPC with an emphasis on cost efficiency and green energy. Their instances can be scaled up to 10× the base configuration for larger workloads.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| A40 | 1x A40 | 48 | 6 | 60 | $1.10 |
| A100 40GB | 1x A100 | 40 | 12 | 120 | $1.45 |
| L40S | 1x L40S | 48 | 8 | 147 | $1.45 |
| A100 80GB | 1x A100 | 80 | 12 | 120 | $1.65 |

CUDO Compute – A UK-based high-performance GPU cloud provider. CUDO offers a variety of NVIDIA (and some AMD) GPU options, including cutting-edge models like the NVIDIA H100, aimed at AI researchers and enterprises needing serious compute power. They emphasize performance and have data centers in multiple regions.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| RTX A4000 Ada | 1x A4000 | 20 | 4 | 16 | $0.46 |
| RTX A5000 | 1x A5000 | 24 | 6 | 24 | $0.49 |
| V100 | 1x V100 | 16 | 4 | 16 | $0.54 |
| A100 PCIe | 1x A100 | 80 | 12 | 48 | $1.83 |
| H100 SXM | 1x H100 | 80 | 12 | 48 | $3.18 |

DataCrunch – A Finland-based provider that deploys the latest NVIDIA GPUs in European data centers. DataCrunch caters to researchers and startups with high-end hardware (like NVIDIA A100, H100, and even H200 GPUs) at competitive prices. It’s a great choice for those needing powerful GPUs in Europe without using a hyperscaler.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| Tesla V100 16GB | 1x V100 | 16 | 6 | 23 | $0.39 |
| RTX A6000 48GB | 1x A6000 | 48 | 10 | 60 | $1.01 |
| A100 SXM4 80GB | 1x A100 | 80 | 22 | 120 | $1.89 |
| H100 SXM5 80GB | 1x H100 | 80 | 30 | 120 | $2.65 |
| H200 SXM5 141GB | 1x H200 | 141 | 44 | 185 | $3.03 |

DigitalOcean – A USA-based developer-friendly cloud platform known for simplicity. DigitalOcean has recently introduced GPU “Droplets,” currently offering NVIDIA H100 and H200 instances for compute-intensive tasks. This allows developers to integrate high-end GPUs into DigitalOcean’s easy-to-use cloud environment.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| NVIDIA H100 SXM | 1x H100 | 5 TiB NVMe | 20 | 240 | $3.39 |
| NVIDIA H200 | 1x H200 | 5 TiB NVMe | 24 | 240 | $3.44 |

Exoscale – A Switzerland-based European cloud provider offering GPU instances suitable for machine learning, rendering, and compute-heavy tasks. Known for GDPR compliance and regional datacenters.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| 3080ti Small | 1x 3080ti | 12 | 12 | 56 | $1.04 |
| P100 Small | 1x P100 | 16 | 12 | 56 | $1.17 |
| A40 Small | 1x A40 | 48 | 12 | 56 | $2.14 |
| P100 Huge | 4x P100 | 64 | 48 | 225 | $2.82 |
| A40 Huge | 8x A40 | 384 | 96 | 448 | $17.06 |

FluidStack – A UK-based cloud platform specialized in AI training and inference. FluidStack aggregates underutilized or distributed GPUs to provide affordable compute. It offers a range of NVIDIA GPU types, from consumer-grade (RTX/A-series) to data center GPUs (A100/H100), allowing customers to train models at lower cost.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| NVIDIA A4000 | 1x A4000 | 16 | 36 | 128 | $0.40 |
| NVIDIA A5000 | 1x A5000 | 24 | 36 | 128 | $0.55 |
| NVIDIA A40 | 1x A40 | 48 | 32 | 128 | $0.60 |
| NVIDIA A6000 | 1x A6000 | 48 | 48 | 128 | $0.80 |
| NVIDIA L40 | 1x L40 | 48 | 32 | 48 | $1.25 |
| NVIDIA A100 40GB | 1x A100 | 40 | 32 | 128 | $1.65 |
| NVIDIA A100 80GB | 1x A100 | 80 | 48 | 256 | $1.80 |
| NVIDIA H100 PCIe | 1x H100 | 80 | 48 | 256 | $2.89 |

Fly.io – A USA-based application hosting platform. Primarily known for deploying web apps to global regions, Fly.io also offers GPU-backed machines in select regions. These GPU instances allow running AI inference or smaller training jobs at the edge, benefiting from Fly.io’s distributed infrastructure.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| L40S | 1x L40S | 48 | – | – | $1.25 |
| A10 | 1x A10 | 24 | – | – | $1.50 |
| A100 40G PCIe | 1x A100 | 40 | – | – | $2.50 |
| A100 80G SXM | 1x A100 | 80 | – | – | $3.50 |

Google Cloud – A USA-based global cloud platform by Google. Google Cloud’s GPU offerings include the latest NVIDIA chips (L4, A100, etc.), integrated with Google’s AI and data services. It offers flexible instance types; for example, the G2 series with L4 GPUs for inference, and the A2 series with A100 GPUs for heavy training. Users can scale from 1 GPU up to 16 GPUs in a single VM.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| g2-standard-4 | 1x L4 | 24 | 4 | 16 | $0.71 |
| g2-standard-48 | 4x L4 | 96 | 48 | 192 | $4.00 |
| a2-highgpu-1g | 1x A100 | 40 | 12 | 85 | $3.67 |
| a2-highgpu-4g | 4x A100 | 160 | 48 | 340 | $14.69 |
| a2-megagpu-16g | 16x A100 | 640 | 96 | 1360 | $55.74 |

Green AI Cloud – A Sweden-based provider focused on sustainability. It offers high-performance GPU servers powered by renewable energy. Green AI Cloud specializes in large multi-GPU configurations (for example, servers with up to 8× NVIDIA H100 or H200 GPUs) aimed at organizations that need extreme performance with a smaller carbon footprint.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| H200 | 1x H200 | 141 | 112 | 2048 | $2.75 |
| H200 (8x) | 8x H200 | 1128 | 112 | 2048 | $22.00 |

Hetzner – A Germany-based provider known for highly cost-effective hosting. Hetzner’s GPUs are offered as add-ons to its dedicated servers rather than on-demand VMs. This provides bare-metal performance at low hourly rates (note: a one-time setup fee applies). It’s a great option for steady, long-running GPU workloads where cost savings are paramount.

| Instance | GPUs | Storage | CPU | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| GEX44 | 1x RTX 4000 | 2x 1.92 TB NVMe SSD | Intel Core i5-13500 | 64 | $0.34 |
| GEX130 | 1x RTX 6000 | 2x 1.92 TB NVMe | Intel Xeon Gold 5412U | 128 | $1.51 |

Hyperstack – A UK-based cloud specializing in affordable GPUs. Hyperstack offers a wide range of NVIDIA GPU configurations, from single low-cost GPUs (starting at just $0.15/h) to multi-GPU setups. It’s known for having some of the lowest prices in the market, making it popular for budget-conscious AI practitioners and hobbyists.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| NVIDIA A4000 | 1x A4000 | 16 | 6 | 24 | $0.15 |
| NVIDIA A5000 | 1x A5000 | 24 | 8 | 24 | $0.25 |
| NVIDIA A6000 | 1x A6000 | 48 | 28 | 58 | $0.50 |
| NVIDIA A100 80GB | 1x A100 | 80 | 28 | 120 | $1.35 |
| NVIDIA H100 80GB | 1x H100 | 80 | 28 | 180 | $1.90 |

Koyeb – A France-based developer-centric cloud platform offering container-based GPU compute services. Designed for deployment automation and scalable GPU-backed inference.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| RTX4000-ADA | 1x RTX4000 | 20 | 6 | 44 | $0.50 |
| V100-SXM2 | 1x V100 | 16 | 8 | 44 | $0.85 |
| L4 | 1x L4 | 24 | 15 | 44 | $1.00 |
| L40S | 1x L40S | 48 | 30 | 92 | $2.00 |
| A100 | 1x A100 | 80 | 15 | 180 | $2.70 |
| H100 | 1x H100 | 80 | 15 | 180 | $3.30 |

Lambda Labs – A USA-based GPU cloud provider and hardware vendor specializing in deep learning. Lambda Labs offers on-demand and reserved GPU servers with a wide range of configurations, from single-GPU instances (including GeForce/RTX cards) to multi-GPU powerhouse servers (8× A100 or H100). It’s a go-to for many ML engineers due to its focus on AI (and integrations like pre-built ML environments).

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| 1x RTX6000 | 1x RTX6000 | 24 | 14 | 46 | $0.50 |
| 1x A100 40GB | 1x A100 40GB | 40 | 30 | 200 | $1.29 |
| 1x H100 PCIe | 1x H100 | 80 | 26 | 200 | $2.49 |
| 4x A100 | 4x A100 | 160 | 120 | 800 | $5.16 |
| 8x H100 SXM | 8x H100 | 640 | 208 | 1800 | $23.92 |

Linode – A USA-based cloud provider (part of Akamai) known for simple and affordable VPS. Linode introduced GPU instances (“Accelerators”) that utilize NVIDIA RTX 6000 GPUs.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| GPU 1 | 1x RTX6000 | 24 | 8 | 32 | $1.50 |
| GPU 2 | 2x RTX6000 | 48 | 16 | 64 | $3.00 |
| GPU 3 | 3x RTX6000 | 72 | 20 | 96 | $4.50 |
| GPU 4 | 4x RTX6000 | 96 | 24 | 128 | $6.00 |

Massed Compute – A USA-based GPU infrastructure provider offering on-demand access to a variety of NVIDIA GPU options with flexible scaling.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| A30 | 1x A30 | 24 | 16 | 48 | $0.33 |
| A5000 | 1x A5000 | 24 | 10 | 48 | $0.41 |
| A6000 | 1x A6000 | 48 | 6 | 48 | $0.57 |
| RTX6000 Ada | 1x RTX 6000 | 48 | 12 | 64 | $0.97 |
| L40S | 1x L40S | 48 | 22 | 128 | $1.10 |
| A100 80GB | 1x A100 | 80 | 12 | 64 | $1.72 |
| H100 | 1x H100 | 80 | 20 | 128 | $2.98 |

Microsoft Azure – A USA-based global cloud platform by Microsoft. Offers GPU-enabled VMs (the N-series) using NVIDIA T4, A100, H100, and other GPUs for AI and HPC workloads.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| NC4as T4 v3 | 1x T4 | 16 | 4 | 28 | $0.53 |
| NC6 | 1x K80 | 12 | 6 | 56 | $0.90 |
| ND6s | 1x P40 | 24 | 6 | 112 | $2.07 |
| NC24ads A100 v4 | 1x A100 | 80 | 24 | 220 | $4.78 |
| NC40ads H100 v5 | 1x H100 | 80 | 40 | 320 | $8.82 |
| ND96asr A100 v4 | 8x A100 | 320 | 96 | 900 | $27.20 |

Nebius – A Netherlands-based AI cloud platform focused on high-end GPU compute including H100 and H200 for European workloads.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| L40S PCIe | 1x L40S | 48 | 8 | 32 | $1.58 |
| H100 SXM | 1x H100 | 80 | 20 | 160 | $3.55 |
| H100 SXM (2x) | 2x H100 | 160 | 40 | 320 | $7.10 |
| H100 SXM (4x) | 4x H100 | 320 | 80 | 640 | $14.20 |
| H200 (8x) | 8x H200 | 1128 | – | – | $20.72 |
| H100 SXM (8x) | 8x H100 | 640 | 160 | 1280 | $28.39 |

Oblivus – A UK-based cloud GPU provider offering simple, low-cost access to GPUs like A4000, A5000, A100, and H100 for AI workloads.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| A4000 | 1x A4000 | 16 | 6 | 24 | $0.20 |
| A5000 | 1x A5000 | 24 | 8 | 30 | $0.50 |
| A6000 | 1x A6000 | 48 | 16 | 60 | $0.55 |
| A100 80GB PCIe | 1x A100 | 80 | 28 | 120 | $1.47 |
| H100 PCIe | 1x H100 | 80 | 28 | 180 | $1.98 |

Oracle Cloud – A USA-based enterprise cloud provider offering AMD MI300X GPU servers for large-scale AI and HPC workloads.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| BM.GPU.MI300X.8 | 8x MI300X | 1536 | 56 | 2000 | $6.00 |

OVHcloud – A France-based global cloud provider with competitively priced GPU instances such as V100, L4, and H100 across European data centers.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| t1-le-45 | 1x V100 | 16 | 8 | 45 | $0.77 |
| l4-90 | 1x L4 | 24 | 22 | 90 | $1.00 |
| t1-le-90 | 2x V100 | 32 | 16 | 90 | $1.55 |
| h100-380 | 1x H100 | 80 | 30 | 380 | $2.99 |
| h100-760 | 2x H100 | 160 | 60 | 760 | $5.98 |

Replicate – A USA-based service for running ML models via API. Offers GPU-backed infrastructure for pay-as-you-go inference and fine-tuning.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| NVIDIA T4 | 1x T4 | 16 | 4 | 16 | $0.81 |
| NVIDIA A40 (Large) | 1x A40 | 48 | 10 | 72 | $2.61 |
| NVIDIA A100 (80GB) | 1x A100 | 80 | 10 | 144 | $5.04 |
| 4x NVIDIA A40 (Large) | 4x A40 | 192 | 40 | 288 | $10.44 |
| 8x NVIDIA A100 (80GB) | 8x A100 | 640 | 80 | 960 | $40.32 |

RunPod – A USA-based provider that lets you spin up GPU containers in seconds across 30+ regions. It offers flexible, on-demand GPU cloud instances with auto-scaling and free data transfers. RunPod’s platform is popular for its ease of use and low-cost spot pricing for AI workloads.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| A30 | 1x A30 | 24 | 8 | 31 | $0.22 |
| RTX A4000 | 1x A4000 | 16 | 4 | 20 | $0.32 |
| A4500 | 1x A4500 | 20 | 4 | 29 | $0.34 |
| A5000 | 1x A5000 | 24 | 4 | 24 | $0.36 |
| A40 | 1x A40 | 48 | 9 | 50 | $0.39 |

Scaleway – A France-based cloud provider offering a range of NVIDIA GPU options (L4, L40S, H100, etc.) for AI, video rendering, and HPC, with pricing optimized for European customers.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| L4-1-24G | 1x L4 | 24 | 8 | 48 | $0.90 |
| GPU-3070 | 1x RTX 3070 | 8 | 8 | 16 | $1.11 |
| L40S-1-48G | 1x L40S | 48 | 8 | 48 | $1.61 |
| H100-1-80G | 1x H100 | 80 | 24 | 240 | $2.86 |
| L40S-8-48G | 8x L40S | 384 | 64 | 768 | $12.39 |

Sesterce – A France-based cloud GPU marketplace originally known for mining hardware. Offers a wide range of low-cost and modern GPU compute options including A4000, A100, and MI300X.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| 1x A4000 | 1x A4000 | 16 | 4 | – | $0.15 |
| 1x A5000 | 1x A5000 | 24 | 6 | – | $0.49 |
| 1x A100 80G | 1x A100 | 80 | 28 | – | $1.35 |
| 1x H100 | 1x H100 | 80 | 28 | 180 | $1.90 |
| 1x MI300X | 1x MI300X | 192 | 6 | – | $3.00 |

TensorWave – A USA-based cloud provider offering AMD Instinct GPU compute (MI300X) targeted at foundation model training. Pricing is quote-based.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| AMD MI300X (8x) | 8x MI300X | 1536 | – | – | On request |

Vultr – A USA-based global cloud provider offering GPU instances (A16, A40, L40S, GH200) with both single and multi-GPU configurations across global regions.

| Instance | GPUs | Storage | vCPUs | RAM (GB) | Price/Hour |
|---|---|---|---|---|---|
| A16 | 1x A16 | 16 | 6 | 64 | $0.47 |
| A40 | 1x A40 | 48 | 24 | 120 | $1.71 |
| L40S | 1x L40S | 48 | 16 | 180 | $1.67 |
| B200 | 8x B200 | 1536 | 248 | 2826 | $2.89 |
| AMD MI325X | 8x AMD MI325X | 2048 | 248 | 2872 | $4.61 |