Ori Virtual Machines

Launch Powerful GPU Cloud Instances On Demand

Deploy and manage GPU-accelerated virtual machines on Ori Global Cloud with ease. Competitive pricing. Dedicated support.

High-end GPU Instances

H100

Starting at $3.24/h
80GB VRAM

Currently the most powerful commercially accessible Tensor Core GPU for large-scale AI and HPC workloads.

A100

Starting at $3.29/h
80GB VRAM

The most popular (and therefore scarce) Tensor Core GPU for machine learning and HPC workloads, balancing cost and efficiency.

GH200

Coming Soon!
144GB VRAM

The next generation of AI supercomputing, offering a massive shared memory space with linear scalability for giant AI models. Available through early access only.

Professional Instances

V100S

Starting at $0.95/h
32GB VRAM

V100

Starting at $0.83/h
16GB VRAM

A40

Starting at $1.95/h
48GB VRAM

L40S

Starting at $2.73/h
48GB VRAM

L4

Starting at $0.93/h
24GB VRAM

A16

Starting at $0.54/h
16GB VRAM
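
To put the hourly rates above in perspective, here is a minimal back-of-the-envelope estimate. It is a sketch only: the figures simply multiply the listed on-demand rates and ignore storage, networking and any volume discounts.

```python
# Rough on-demand cost estimates from the listed hourly rates.
# Illustrative arithmetic only; excludes storage, networking and discounts.
HOURLY_RATE = {"H100": 3.24, "A100": 3.29, "V100": 0.83, "L4": 0.93}

def estimate_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """Return the approximate on-demand cost in USD."""
    return HOURLY_RATE[gpu] * num_gpus * hours

# Example: a 3-day fine-tuning run on 8x A100.
print(f"${estimate_cost('A100', 8, 72):,.2f}")  # ~ $1,895.04
```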

Can't find the exact GPU setup you need?

Ori has experience providing AI infrastructure on the most powerful GPU assemblies on the market,
whether you need NVIDIA HGX 8-GPU boards or a full NVIDIA DGX ecosystem for AI at scale.

Why Ori

Ship AI economically with Ori

Pay 81% less on GPU compute
compared to other cloud providers.

Availability

We go above and beyond to find metal for you when GPUs are scarce elsewhere.

Pricing

Ori specialises only in AI use cases, enabling us to be a low-cost GPU cloud provider.

Scalability

Highly configurable compute, storage and networking, from one GPU to thousands.

Range

On-demand cloud access to NVIDIA’s top Tensor Core GPUs: H100, A100 and more.

Launch an instance in seconds

Deploy GPU-powered virtual instances for AI training, fine-tuning and
inference right now. Competitive pricing. Easy setup. Dedicated support.
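
Once an instance is up, a quick sanity check confirms the GPU is visible to your framework. This is a minimal sketch assuming a standard CUDA-enabled image with PyTorch installed, not a specific Ori image:

```python
import torch

# Confirm the driver and CUDA runtime see the GPU(s) on the new instance.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB VRAM")
else:
    print("No CUDA device visible - check drivers with `nvidia-smi`.")
```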