HIGH PERFORMANCE AI COMPUTING GPU SERVER

8X NVIDIA H100
Tensor Core SERVER

NVIDIA H100 SERVER

Based on the NVIDIA Hopper™ architecture, NVIDIA H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision that provides up to 4x faster training over the prior generation for GPT-3 (175B) models.

NVIDIA Maestro H100 now
available on ProX PC

Unprecedented acceleration for the world’s most demanding AI and machine learning workloads

AI Training and AI Inference
NVIDIA MAESTRO H100

8 x NVIDIA H100 80 GB SXM

Up to 3TB RDIMM (2R) or

Up to 16TB RDIMM-3DS (2S8Rx4)

Supports 5th and 4th Gen Intel® Xeon® Scalable Processors

Up to 128 cores / 256 threads @ 4.1 GHz

Up to 400G networking

Availability

8X NVIDIA H100 GPU Server

Key Features
  • Ready-to-ship
  • Optimal Price
  • Fast & Stable Connectivity

The world's most powerful GPU servers

NVIDIA H100 supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. With 80GB of HBM3 delivering 3.35TB/s of bandwidth, the H100's large, fast memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for HPC workloads.

Llama2 70B inference: 1.9x faster
GPT-3 175B inference: 1.6x faster
High-performance computing: 110x faster
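As a rough back-of-the-envelope check (an illustrative sketch, not vendor guidance), you can estimate whether a model's weights alone fit in this server's aggregate 8 x 80GB of GPU memory; the model sizes below match the two LLMs named above, and the sketch deliberately ignores KV cache, activations, and runtime overhead:

```python
# Illustrative estimate of GPU memory needed to hold model weights.
# Ignores KV cache, activations, and framework overhead.

GPU_MEMORY_GB = 80   # per NVIDIA H100 SXM (from the spec table)
NUM_GPUS = 8         # GPUs in this server

def weight_memory_gb(params_billions: float, bytes_per_param: int) -> float:
    """GB required just to store the model weights at a given precision."""
    return params_billions * 1e9 * bytes_per_param / 1e9

total_gb = GPU_MEMORY_GB * NUM_GPUS            # 640 GB aggregate GPU memory
llama2_70b = weight_memory_gb(70, 2)           # Llama2 70B at FP16: 140 GB
gpt3_175b = weight_memory_gb(175, 2)           # GPT-3 175B at FP16: 350 GB

print(total_gb, llama2_70b, gpt3_175b)         # 640 140.0 350.0
```

Both models' FP16 weights fit comfortably in the 640GB aggregate, which is why an 8-GPU node is a common unit for LLM inference.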

NVIDIA H100
Specifications

NVIDIA H100¹
FP64: 34 TFLOPS
FP64 Tensor Core: 67 TFLOPS
FP32: 67 TFLOPS
TF32 Tensor Core: 989 TFLOPS²
BFLOAT16 Tensor Core: 1,979 TFLOPS²
FP16 Tensor Core: 1,979 TFLOPS²
FP8 Tensor Core: 3,958 TFLOPS²
INT8 Tensor Core: 3,958 TOPS²
GPU Memory: 80GB
GPU Memory Bandwidth: 3.35TB/s
Decoders: 7 NVDEC, 7 JPEG
Interconnect: NVIDIA NVLink®: 900GB/s; PCIe Gen5: 128GB/s
  ¹ Preliminary specifications. May be subject to change.
  ² With sparsity.
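From the figures in the table, one can derive a quick roofline-style ratio of peak compute to memory bandwidth (an illustrative sketch using the table's with-sparsity FP8 number, not an official NVIDIA metric):

```python
# Back-of-the-envelope roofline ratio for the H100:
# how many FLOPs must be performed per byte of HBM traffic
# before the GPU becomes compute-bound rather than memory-bound.

FP8_TFLOPS = 3958       # peak FP8 Tensor Core throughput, with sparsity
BANDWIDTH_TB_S = 3.35   # HBM memory bandwidth

ops_per_byte = (FP8_TFLOPS * 1e12) / (BANDWIDTH_TB_S * 1e12)
print(round(ops_per_byte))   # ~1181 FLOPs per byte
```

A ratio this high is why bandwidth-heavy workloads such as LLM token generation are typically memory-bound, and why the 3.35TB/s figure matters as much as the TFLOPS numbers.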

Reserve the 8X NVIDIA H100 Server now

Get ready to build, test, and deploy