Unprecedented acceleration for the world’s most
demanding AI and machine learning workloads
starting at $2.30 per hour

Availability

  • Mini Cluster: 64 H100 GPUs
  • Base Cluster: 248 H100 GPUs

Pricing

  • Starting at $2.30 per hour
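As a rough illustration of what the advertised rate implies at cluster scale, the sketch below estimates monthly cost, assuming the $2.30/hour figure is billed per GPU (billing granularity, commitments, and discounts are assumptions, not confirmed pricing):

```python
# Rough cost estimate, assuming the advertised $2.30/hour applies per GPU.
# Billing details are assumptions for illustration, not confirmed pricing.

HOURLY_RATE_PER_GPU = 2.30
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(gpu_count: int) -> float:
    """Estimated monthly cost for a cluster of `gpu_count` GPUs."""
    return gpu_count * HOURLY_RATE_PER_GPU * HOURS_PER_MONTH

print(f"Mini Cluster (64 GPUs):  ${monthly_cost(64):,.2f}/month")
print(f"Base Cluster (248 GPUs): ${monthly_cost(248):,.2f}/month")
```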

Key Features

  • NVIDIA Quantum-2 3200Gb/s InfiniBand Networking
  • Non-Blocking InfiniBand Network Design
  • NVIDIA H100 SXM with FP8 Support
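FP8 support trades precision for throughput by packing each value into 8 bits. As an illustrative sketch only (pure Python, not any NVIDIA API), the E4M3 FP8 format (4 exponent bits, 3 mantissa bits, bias 7) can be enumerated to show its dynamic range and rounding behavior:

```python
# Illustrative sketch of the FP8 E4M3 format used by H100/H200 Tensor Cores:
# 4 exponent bits, 3 mantissa bits, exponent bias 7. Enumerates every
# representable magnitude and rounds to the nearest one -- a teaching aid,
# not NVIDIA's implementation.

def e4m3_values():
    vals = {0.0}
    for m in range(1, 8):                  # subnormals: 2^-6 * m/8
        vals.add(2.0**-6 * m / 8)
    for e in range(1, 16):                 # normals: 2^(e-7) * (1 + m/8)
        for m in range(8):
            if e == 15 and m == 7:         # bit pattern reserved for NaN
                continue
            vals.add(2.0**(e - 7) * (1 + m / 8))
    return sorted(vals)

_E4M3 = e4m3_values()

def to_e4m3(x: float) -> float:
    """Round x to the nearest representable E4M3 value (magnitude capped at 448)."""
    sign = -1.0 if x < 0 else 1.0
    return sign * min(_E4M3, key=lambda v: abs(v - abs(x)))

print(to_e4m3(0.3))    # rounds to a nearby representable value (0.3125)
print(to_e4m3(1e6))    # clamps to the E4M3 maximum, 448.0
```

The coarse value grid is why FP8 training relies on per-tensor scaling: inputs are rescaled into E4M3's narrow range before quantization.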

Enterprise-ready at any scale and any location

Clusters at any size

Vultr's enterprise-ready infrastructure seamlessly supports any cluster size of NVIDIA H100 and H200 GPUs. Whether you require a small cluster or a massive deployment, Vultr ensures reliable, high-performance computing to meet your specific needs.

Globally available, locally accessible

Large clusters of NVIDIA H100 and H200 GPUs are available where you need them, thanks to Vultr’s extensive infrastructure. With 32 global cloud data center locations across six continents, we guarantee low latency and high availability, enabling your enterprise to achieve optimal performance worldwide.

Learn more about Vultr’s data center locations

Enterprise-grade compliance and security

Vultr ensures our platform, products, and services meet diverse global compliance, privacy, and security needs, covering areas such as server availability, data protection, and privacy. Our commitment to industry-wide privacy and security frameworks, including ISO and SOC 2 Type 2 standards, demonstrates our dedication to protecting our customers' data.

Learn more about Vultr’s security and compliance

Purpose-built for AI, simulation, and data analytics

AI, complex simulations, and massive datasets require multiple GPUs with extremely fast interconnections and a fully accelerated software stack. The NVIDIA HGX™ AI supercomputing platform brings together the full power of NVIDIA GPUs, NVLink®, NVIDIA networking, and fully optimized AI and high-performance computing (HPC) software stacks to provide the highest application performance and drive the fastest time to insights.
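A back-of-the-envelope model shows why interconnect bandwidth dominates multi-GPU work. In a standard ring all-reduce, each GPU moves roughly 2·(N−1)/N times the gradient size over its link; the sketch below (my own illustration, with an assumed gradient size) compares transfer time at NVLink's 900GB/s against PCIe Gen5's 128GB/s:

```python
# Back-of-the-envelope all-reduce timing. Uses the standard ring all-reduce
# communication volume of 2*(N-1)/N * data_size per GPU and ignores latency
# and compute/comm overlap -- an illustration, not a benchmark.

def allreduce_seconds(data_gb: float, n_gpus: int, link_gb_per_s: float) -> float:
    volume_gb = 2 * (n_gpus - 1) / n_gpus * data_gb
    return volume_gb / link_gb_per_s

GRADS_GB = 140.0   # assumed: FP16 gradients of a 70B-parameter model
N = 8              # GPUs in one HGX node

print(f"NVLink 900GB/s:    {allreduce_seconds(GRADS_GB, N, 900.0)*1000:.1f} ms")
print(f"PCIe Gen5 128GB/s: {allreduce_seconds(GRADS_GB, N, 128.0)*1000:.1f} ms")
```

The ~7x gap per collective is incurred every optimizer step, which is why NVLink-class interconnects matter for training throughput.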

The world’s most powerful GPU

NVIDIA H200 supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200’s larger and faster memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for HPC workloads.
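The practical consequence of 141GB of HBM3e is which models fit on a single GPU. A hedged back-of-the-envelope (weights only, using standard bytes-per-parameter sizes, ignoring KV cache, activations, and framework overhead):

```python
# Rough check of whether an LLM's weights fit in GPU memory. Weights-only:
# KV cache, activations, and framework overhead are ignored, so real
# requirements are higher. Bytes-per-parameter are standard format sizes.

def weights_gb(params_billion: float, bytes_per_param: int) -> float:
    return params_billion * bytes_per_param  # 1B params x 1 byte = 1 GB

for precision, nbytes in [("FP16", 2), ("FP8", 1)]:
    need = weights_gb(70, nbytes)  # e.g. a Llama2-70B-class model (assumed)
    print(f"70B @ {precision}: {need:.0f} GB "
          f"({'fits' if need <= 141 else 'does not fit'} in H200's 141 GB)")
```

By this estimate a 70B model's FP16 weights (140GB) just squeeze into one H200, whereas the H100's 80GB would require sharding across at least two GPUs.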

  • Llama2 70B inference: 1.9x faster
  • GPT-3 175B inference: 1.6x faster
  • High-performance computing: 110x faster

NVIDIA H100 & H200 Specifications

Specification         | NVIDIA H100 SXM                       | NVIDIA H200 SXM¹
FP64                  | 34 TFLOPS                             | 34 TFLOPS
FP64 Tensor Core      | 67 TFLOPS                             | 67 TFLOPS
FP32                  | 67 TFLOPS                             | 67 TFLOPS
TF32 Tensor Core      | 989 TFLOPS²                           | 989 TFLOPS²
BFLOAT16 Tensor Core  | 1,979 TFLOPS²                         | 1,979 TFLOPS²
FP16 Tensor Core      | 1,979 TFLOPS²                         | 1,979 TFLOPS²
FP8 Tensor Core       | 3,958 TFLOPS²                         | 3,958 TFLOPS²
INT8 Tensor Core      | 3,958 TOPS²                           | 3,958 TOPS²
GPU Memory            | 80GB                                  | 141GB
GPU Memory Bandwidth  | 3.35TB/s                              | 4.8TB/s
Decoders              | 7 NVDEC, 7 JPEG                       | 7 NVDEC, 7 JPEG
Interconnect          | NVIDIA NVLink® 900GB/s;               | NVIDIA NVLink® 900GB/s;
                      | PCIe Gen5 128GB/s                     | PCIe Gen5 128GB/s

¹ Preliminary specifications; may be subject to change.
² With sparsity.
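The table's peak-throughput and bandwidth figures can be combined into a simple roofline estimate: a kernel is memory-bound when its arithmetic intensity (FLOPs per byte moved) falls below peak FLOPS ÷ memory bandwidth. The sketch below uses dense FP8 throughput taken as half the with-sparsity figure above (a common assumption, not a value from the table):

```python
# Roofline "ridge point": kernels performing fewer FLOPs per byte than this
# are memory-bandwidth-bound rather than compute-bound. Dense FP8 throughput
# is assumed to be half the with-sparsity table figure.

def ridge_point(peak_tflops: float, bandwidth_tb_s: float) -> float:
    return peak_tflops / bandwidth_tb_s  # FLOPs per byte

h100 = ridge_point(3958 / 2, 3.35)  # 80GB HBM3 @ 3.35TB/s
h200 = ridge_point(3958 / 2, 4.8)   # 141GB HBM3e @ 4.8TB/s

print(f"H100 FP8 ridge point: {h100:.0f} FLOP/byte")
print(f"H200 FP8 ridge point: {h200:.0f} FLOP/byte")
```

Because the H200 pairs the same compute with higher bandwidth, its ridge point is lower, so more low-intensity workloads (notably LLM inference) run closer to peak.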