H100
Starting at $3.24/h
80GB VRAM
Currently the most powerful and commercially accessible Tensor Core GPU for large-scale AI and HPC workloads.
Deploy and manage GPU-accelerated virtual machines on Ori Global Cloud with ease. Competitive pricing. Dedicated support.
Starting at $3.29/h
80GB VRAM
The most popular, and therefore scarcest, Tensor Core GPU for machine learning and HPC workloads, balancing cost and efficiency.
Coming Soon!
144 GB VRAM
The next generation of AI supercomputing offers a massive shared memory space with linear scalability for giant AI models. Available through early access only.
Starting at $0.95/h
32GB VRAM
Starting at $0.83/h
16GB VRAM
Starting at $1.95/h
48GB VRAM
Starting at $2.73/h
48GB VRAM
Starting at $0.93/h
24GB VRAM
Starting at $0.54/h
16GB VRAM
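For a quick sense of what the listed rates mean for a real job, here is a minimal cost estimator. The hourly prices are copied from the cards above; the function and its name are illustrative, not part of any Ori API.

```python
# Hypothetical cost estimator using the on-demand hourly rates listed above.
# Rates are USD per GPU per hour (H100 and A100 80GB cards from this page).
RATES_PER_HOUR = {
    "H100 (80GB)": 3.24,
    "A100 (80GB)": 3.29,
}

def estimate_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """Return the total on-demand cost in USD for num_gpus GPUs running for hours."""
    return RATES_PER_HOUR[gpu] * num_gpus * hours

# Example: an 8x H100 fine-tuning run for 24 hours.
print(f"${estimate_cost('H100 (80GB)', 8, 24):,.2f}")
```

An 8x H100 node at these rates works out to roughly $622 per day, which is the kind of figure worth plugging in before sizing a training run.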
Ori has experience providing AI infrastructure on the most powerful GPU assemblies on the market, whether you need NVIDIA HGX 8x GPU boards or a full NVIDIA DGX ecosystem for AI at scale.
Pay 81% less on GPU compute compared to other cloud providers.
We go above and beyond to find metal for you when GPUs are scarce and unavailable.
Ori specialises exclusively in AI use cases, which lets us offer GPU compute at lower cost.
Highly configurable compute, storage and networking, from one GPU to thousands.
On-demand cloud access to NVIDIA’s top Tensor Core GPUs: H100, A100 and more.
Deploy GPU-powered virtual instances for AI training, fine-tuning and inference right now. Competitive pricing. Easy setup. Dedicated support.