Built on the NVIDIA Hopper™ architecture, the NVIDIA H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision, delivering up to 4x faster training than the prior generation on GPT-3 (175B) models.
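The FP8 path of the Transformer Engine can be exercised from PyTorch via NVIDIA's `transformer_engine` package. A minimal sketch, assuming that package is installed and an H100 is visible as `cuda:0`; layer sizes, batch size, and the loss are illustrative only:

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# FP8 scaling recipe; HYBRID uses E4M3 for activations/weights
# in the forward pass and E5M2 for gradients in the backward pass.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True).cuda()   # illustrative size
optimizer = torch.optim.SGD(layer.parameters(), lr=1e-3)

x = torch.randn(16, 4096, device="cuda")

# GEMMs inside the autocast region run as FP8 Tensor Core ops on Hopper.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
    loss = y.float().pow(2).mean()   # placeholder loss

loss.backward()
optimizer.step()
```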
8x NVIDIA H100 80 GB SXM
Up to 3 TB RDIMM (2R) or
Up to 16 TB RDIMM-3DS (2S8Rx4)
Supports 5th and 4th Gen Intel® Xeon® Scalable Processors
Up to 128 cores / 256 threads @ 4.1 GHz
Up to 400G networking
8X NVIDIA H100 GPU Server
NVIDIA H100 supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. Its large, fast HBM3 memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for HPC workloads.
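On a deployed system, the advertised capacity and architecture generation can be confirmed by querying device properties. A minimal sketch, assuming PyTorch with CUDA support on the node (Hopper reports compute capability 9.0):

```python
import torch

# Print name, total memory, and compute capability for each visible GPU.
for i in range(torch.cuda.device_count()):
    p = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {p.name}, "
          f"{p.total_memory / 1024**3:.0f} GiB, "
          f"SM {p.major}.{p.minor}")
```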
| | NVIDIA H100 |
|---|---|
| FP64 | 34 TFLOPS |
| FP64 Tensor Core | 67 TFLOPS |
| FP32 | 67 TFLOPS |
| TF32 Tensor Core | 989 TFLOPS² |
| BFLOAT16 Tensor Core | 1,979 TFLOPS² |
| FP16 Tensor Core | 1,979 TFLOPS² |
| FP8 Tensor Core | 3,958 TFLOPS² |
| INT8 Tensor Core | 3,958 TOPS² |
| GPU Memory | 80 GB |
| GPU Memory Bandwidth | 3.35 TB/s |
| Decoders | 7 NVDEC, 7 JPEG |
| Interconnect | NVIDIA NVLink®: 900 GB/s; PCIe Gen5: 128 GB/s |

² With sparsity.
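The interconnect figures above are aggregate, bidirectional bandwidths; a single device-to-device copy uses only some of the links in one direction and will measure well below 900 GB/s. A minimal sketch for a rough bandwidth estimate, assuming PyTorch and at least two of the node's GPUs with peer access enabled; buffer size and iteration count are illustrative:

```python
import torch

# 1 GiB source buffer on GPU 0, destination on GPU 1.
src = torch.empty(1024**3, dtype=torch.uint8, device="cuda:0")
dst = torch.empty_like(src, device="cuda:1")

# Warm up so allocation and peer-access setup are excluded from timing.
for _ in range(3):
    dst.copy_(src)
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
iters = 20

start.record()
for _ in range(iters):
    dst.copy_(src)
end.record()
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")

gib_moved = src.numel() * iters / 1024**3
seconds = start.elapsed_time(end) / 1000  # elapsed_time returns ms
print(f"~{gib_moved / seconds:.1f} GiB/s device-to-device (rough estimate)")
```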