Performance Recipes are ready-to-use templates for evaluating performance of specific AI use cases across hardware and software combinations. These containerized recipes allow users to quickly set up and run standardized benchmarking methodology in their own environment, ensuring consistent and comparable results across platforms.
These Performance Recipes support performance characterization:
- across a variety of defined AI workloads, including pre-training, fine-tuning, and inference;
- across GPU-based infrastructure, whether running on-premises or with cloud service providers (CSPs).
Each recipe maps to one workload and can be run at various cluster scales and precisions. These workloads are tested against the NVIDIA Reference Architecture and those results are provided as a baseline for comparison. These performance metrics are collected from production environments and are subject to real-world variability.
To use the Performance Recipes, make sure you have the following prerequisites installed on your cluster:
- Bash 4.2 or newer
- Git LFS
- NGC Registry Access
- NGC CLI 3.148.1 or newer (optional; required for NIM Inference workloads)
- Python 3.12.x
- CUDA: at least 12.3, recommended: 12.8 or newer
- NV Driver: at least 535.129.03, recommended 570.172.08 or newer
- OFED: 5.9-0.5.6.0.127 or newer
- NCCL: 2.19.4 or newer
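A quick way to sanity-check most of these versions from a login node (a sketch; command availability varies by distribution, and ofed_info exists only where OFED is installed):

```bash
bash --version | head -n1                                    # need 4.2+
git lfs version                                              # Git LFS installed?
python3.12 --version                                         # need 3.12.x
nvidia-smi --query-gpu=driver_version --format=csv,noheader  # need 535.129.03+
nvidia-smi | grep "CUDA Version"                             # need 12.3+, 12.8+ recommended
ofed_info -s                                                 # need 5.9-0.5.6.0.127+
```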
Depending on your cluster's job scheduler, ensure the scheduler-specific requirements are met:
- Slurm clusters: see the cluster runtime configuration checks in the installation steps below (Enroot/Pyxis settings and home-mount behavior).
Important: Before proceeding with installation, please review the Known Issues section.
1. Clone the repository:

   ```bash
   git clone https://github.com/NVIDIA/dgxc-benchmarking.git
   cd dgxc-benchmarking
   ```
2. Set up Hugging Face access (required): most recipes fetch model metadata (for example, the tokenizer and config) from the Hugging Face Hub during installation. Unauthenticated access is heavily rate limited and commonly causes installation failures.

   - Create a Hugging Face account (if you don't have one).
   - Create an access token in your Hugging Face settings.
   - Keep the Hugging Face token handy. The installer will prompt for HF_TOKEN (if HF_TOKEN is already set in your environment, the installer uses it as the default).

   Gated model access (important): some recipes use gated Hugging Face model repositories (for example, Llama). Even with HF_TOKEN, you must request repository access separately, and approvals are not instantaneous, so request access early. See Model Access Requirements for the list of recipes that require additional approval.
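   To confirm the token works before installing, a quick check (this assumes the huggingface_hub CLI is installed; it is not provided by the recipes themselves):

   ```bash
   export HF_TOKEN=<your token>
   huggingface-cli whoami   # should print your account name rather than an auth error
   ```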
3. (Optional) For NIM Inference workloads only:

   - Generate an NGC API key from the NGC Registry.
   - Install and configure the NGC CLI:

   x86:

   ```bash
   curl -L https://ngc.nvidia.com/downloads/ngccli_linux.zip -o ngccli_linux.zip
   unzip -q ngccli_linux.zip -d $HOME/.local/bin
   rm ngccli_linux.zip
   export PATH=$HOME/.local/bin:$PATH
   ngc config set
   ```

   arm64:

   ```bash
   curl -L https://ngc.nvidia.com/downloads/ngccli_arm64.zip -o ngccli_arm64.zip
   unzip -q ngccli_arm64.zip -d $HOME/.local/bin
   rm ngccli_arm64.zip
   export PATH=$HOME/.local/bin/ngc-cli:$PATH
   ngc config set
   ```
4. Check cluster runtime configuration:

   If you are installing on a Slurm cluster, confirm the required runtime settings before running the installer. Incorrect defaults can cause install and setup jobs to fail.

   Enroot / Pyxis settings:

   - enroot.conf: set these values in /etc/enroot/enroot.conf:

     ```
     ENROOT_ROOTFS_WRITABLE yes
     ENROOT_REMAP_ROOT yes
     ```

   - environ.d: set cluster-specific environment variables needed inside containers in /etc/enroot/environ.d/*.env. A common issue is a missing NCCL_IB_HCA, which can cause multi-node NCCL jobs to fail or pick the wrong HCAs (see the example after this list).
   - Home mounts: all recipes use --no-container-mount-home to prevent the host environment from overriding the container environment.
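   As an illustration of an environ.d entry (the file name and HCA list below are cluster-specific placeholders, not recommended values):

   ```bash
   # /etc/enroot/environ.d/50-cluster.env -- values are illustrative only
   NCCL_IB_HCA=mlx5_0,mlx5_1,mlx5_4,mlx5_5
   ```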
5. Run the installer:

   Important: Installation may take several hours, depending on the selected recipes, your internet speed, and the current node's resources. Consider using a tool like tmux or screen (see the example after the installer notes below).

   The installer sets up a supported Python environment (reusing your current uv/venv/conda env if compatible, otherwise creating ../llmb_venv one directory above the repo), then launches interactively:

   ```bash
   ./install.sh
   ```
   The installer will:

   - Install uv (the required package manager) if it is not already present
   - Set up a Python 3.12.x virtual environment (reusing your current one if compatible)
   - Install the CLI tools (llmb-run, llmb-install)
   - Prompt you to configure your cluster and select workloads to install

   Note: For detailed installation options, workload-specific virtual environments, and troubleshooting, see the Installer README.
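   For example, to keep the multi-hour install alive across SSH disconnects (a simple tmux pattern, not a requirement):

   ```bash
   tmux new -s llmb        # start a named session
   ./install.sh            # run the installer inside it
   # Detach with Ctrl-b d; reattach later with: tmux attach -t llmb
   ```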
6. Validate your cluster configuration:

   Before running your first benchmark, we recommend running the system info recipe to collect basic system information and check a few common cluster configuration issues:

   ```bash
   cd $LLMB_INSTALL
   llmb-run submit -w microbenchmark_system_info --scale <num_gpus_per_node>
   ```

   This recipe collects host and container diagnostics, including lscpu, NUMA information, enroot.conf, environ.d, and a basic container startup check.
7. Run a benchmark:

   ```bash
   # Navigate to your installed workload directory
   cd $LLMB_INSTALL
   # Example: Run Llama 3.1 405B pretraining on 256 GPUs with FP8 precision
   llmb-run submit -w pretrain_llama3.1 -s 405b --dtype fp8 --scale 256
   ```
8. (Optional) Package results for sharing:

   When you're ready to share results (for example, as part of Exemplar Cloud certification), bundle all experiment data into a single archive:

   ```bash
   llmb-run archive
   ```

   See the llmb-run README for details and options.
Enable tab completion for llmb-run commands and options:

```bash
llmb-run --install-completion
```

Restart your shell after installation for the changes to take effect.
After running the installer, the following directory structure is created:

- LLMB_REPO: directory containing the clone of the recipe repository.
- LLMB_INSTALL: top-level directory for all benchmarking artifacts (images, datasets, venvs, workloads, etc.).
- LLMB_WORKLOAD: workload-specific directory, e.g. ${LLMB_INSTALL}/workloads/pretrain_nemotron4.
- Results, logs, and checkpoints are stored under subfolders of LLMB_WORKLOAD (see below).
Example structure:

```
$LLMB_INSTALL/
├── images/
├── datasets/
├── venvs/
└── workloads/
    └── pretrain_nemotron4/    # <- $LLMB_WORKLOAD
        ├── NeMo/
        ├── ...
        └── experiments/
```
LLMB_REPO, LLMB_INSTALL, and LLMB_WORKLOAD are shorthand terms for directory locations; LLMB_INSTALL is the only environment variable that needs to be set by the user.
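For example (the path is a placeholder for wherever you pointed the installer):

```bash
export LLMB_INSTALL=/lustre/share/llmb   # the one variable you set yourself
```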
Each workload resource includes:
- Configuration details: Comprehensive software and hardware setup information.
- Performance scripts: Predefined scripts to generate and analyze performance results.
The overview page for each workload highlights target performance metrics for the specified configuration, focusing on speed measurements such as the time taken per training step and the number of tokens processed per second.
The following tables list each benchmark used to evaluate the model's performance, along with their specific configurations.
Note: The "Scale (# of GPUs)" column indicates the minimum supported scale and the maximum scale tested for each workload. The recipes may function at larger scales (unless otherwise noted in workload specific README), although they have not been explicitly validated beyond the listed maximum.
GB300 workloads:

| Type | Framework | Model | Container Version | Model Size | Scale (# of GPUs) | Precision | Model Access Required | Checkpointing | Cluster Type |
|---|---|---|---|---|---|---|---|---|---|
| Pretrain | Megatron-Bridge | GPT OSS 120B | 26.02.01 | 120B | 64-512 | BF16 | No | No | Slurm |
| Pretrain | Megatron-Bridge | DeepSeek V3 | 26.02.01 | 671B | 128-512 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 405B | 256-512 | NVFP4, FP8 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 70B | 64-512 | NVFP4, FP8 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 8B | 8-128 | NVFP4, FP8 | Yes | Yes | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.00 | 235B | 256-512 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.00 | 30B | 8-64 | BF16 | Yes | No | Slurm |
| Pretrain | NeMo | Nemotron4 | 25.09.00 | 15B | 16-256 | FP8, BF16 | No | Yes | Slurm |
| Pretrain | NeMo | Nemotron4 | 25.09.00 | 340B | 128-512 | FP8, BF16 | No | Yes | Slurm |
| Pretrain | NeMo | Grok1 | 25.09.00 | 314B | 128-512 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Nemotron-H | 26.02.01 | 56B | 32-512 | FP8 | No | No | Slurm |
| Finetune | Megatron-Bridge | Llama 3 | 26.02.01 | 70B | 8-16 | FP8, BF16 | Yes | No | Slurm |
| Microbenchmark | TRT-LLM | GPT-OSS | 1.1.0rc5 | 120B | 1-4 | MXFP4 | Yes | No | Slurm |
GB200 workloads:

| Type | Framework | Model | Container Version | Model Size | Scale (# of GPUs) | Precision | Model Access Required | Checkpointing | Cluster Type |
|---|---|---|---|---|---|---|---|---|---|
| Pretrain | Megatron-Bridge | GPT OSS 120B | 26.02.01 | 120B | 64-512 | BF16 | No | No | Slurm |
| Pretrain | NeMo | Nemotron4 | 25.09.00 | 15B | 16-256 | FP8, BF16 | No | Yes | Slurm |
| Pretrain | NeMo | Nemotron4 | 25.07.01 | 340B | 128-512 | FP8, BF16 | No | Yes | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 405B | 256-512 | NVFP4, FP8 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 70B | 64-512 | FP8 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 8B | 8-128 | NVFP4, FP8 | Yes | Yes | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.01 | 235B | 256-512 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.01 | 30B | 8-64 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | DeepSeek V3 | 26.02.01 | 671B | 256-512 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | TorchTitan | DeepSeek V3 | 25.12-py3 | 671B | 256 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | NeMo | Grok1 | 25.09.00 | 314B | 128-512 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Nemotron-H | 26.02.01 | 56B | 32-512 | FP8 | No | No | Slurm |
| Finetune | Megatron-Bridge | Llama 3 | 26.02.01 | 70B | 8-16 | FP8, BF16 | Yes | No | Slurm |
| Inference | TRT-LLM | DeepSeek R1 | 1.1.0rc5 | 671B | 4 | NVFP4 | No | No | Slurm |
| Inference | Dynamo | DeepSeek R1 | 0.6.1 | 671B | 32 | NVFP4 | No | No | Slurm |
| Inference | SGLang | DeepSeek R1 | v0.5.3-cu129-gb200 | 671B | 4 | NVFP4 | No | No | Slurm |
| Inference | TRT-LLM | Llama 3.3 | 1.1.0rc5 | 70B | 1-4 | NVFP4 | Yes | No | Slurm |
| Inference | Dynamo + TRT-LLM | GPT-OSS Inference | 0.5.1-rc0.pre3 | 120B | 4+ | MXFP4 | No | No | Kubernetes |
| Inference | Dynamo + TRT-LLM | GPT-OSS | 0.5.1-rc0.pre3 | 120B | 4 | MXFP4 | No | No | Slurm |
| Microbenchmark | TRT-LLM | GPT-OSS | 1.1.0rc5 | 120B | 1-4 | MXFP4 | Yes | No | Slurm |
B300 workloads:

| Type | Framework | Model | Container Version | Model Size | Scale (# of GPUs) | Precision | Model Access Required | Checkpointing | Cluster Type |
|---|---|---|---|---|---|---|---|---|---|
| Pretrain | Megatron-Bridge | GPT OSS 120B | 26.02.01 | 120B | 64-512 | BF16 | No | No | Slurm |
| Pretrain | Megatron-Bridge | DeepSeek V3 | 26.02.01 | 671B | 128-512 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 405B | 256-512 | FP8 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 70B | 64-512 | FP8 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.01 | 235B | 256-512 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.01 | 30B | 8-64 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Nemotron-H | 26.02.01 | 56B | 32-512 | FP8 | No | No | Slurm |
| Finetune | Megatron-Bridge | Llama 3 | 26.02.01 | 70B | 8-16 | FP8, BF16 | Yes | No | Slurm |
| Microbenchmark | TRT-LLM | GPT-OSS | 1.1.0rc5 | 120B | 1-4 | MXFP4 | Yes | No | Slurm |
B200 workloads:

| Type | Framework | Model | Container Version | Model Size | Scale (# of GPUs) | Precision | Model Access Required | Checkpointing | Cluster Type |
|---|---|---|---|---|---|---|---|---|---|
| Pretrain | Megatron-Bridge | GPT OSS 120B | 26.02.01 | 120B | 64-512 | BF16 | No | No | Slurm |
| Pretrain | NeMo | Nemotron4 | 25.09.00 | 15B | 16-256 | FP8, BF16 | No | Yes | Slurm |
| Pretrain | NeMo | Nemotron4 | 25.07.01 | 340B | 128-1024 | FP8, BF16 | No | Yes | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.00 | 405B | 256-1024 | NVFP4, FP8 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.00 | 70B | 64-1024 | NVFP4, FP8 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.00 | 8B | 8-128 | NVFP4, FP8 | Yes | Yes | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.01 | 235B | 256-512 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.01 | 30B | 8-64 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | DeepSeek V3 | 26.02.01 | 671B | 256-512 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | TorchTitan | DeepSeek V3 | 25.12-py3 | 671B | 256 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | NeMo | Grok1 | 25.09.00 | 314B | 256-1024 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Nemotron-H | 26.02.01 | 56B | 32-512 | FP8 | No | No | Slurm |
| Finetune | Megatron-Bridge | Llama 3 | 26.02.01 | 70B | 8-16 | FP8, BF16 | Yes | No | Slurm |
| Inference | TRT-LLM | DeepSeek R1 | 1.1.0rc5 | 671B | 4 | NVFP4 | No | No | Slurm |
| Inference | Dynamo | DeepSeek R1 | 0.6.1 | 671B | 32 | NVFP4 | No | No | Slurm |
| Inference | SGLang | DeepSeek R1 | v0.5.3rc0-cu128-b200 | 671B | 8 | NVFP4 | No | No | Slurm |
| Inference | TRT-LLM | Llama 3.3 | 1.1.0rc5 | 70B | 1 | NVFP4 | Yes | No | Slurm |
| Inference | Dynamo + TRT-LLM | GPT-OSS | 0.6.1 | 120B | 4 | MXFP4 | No | No | Slurm |
| Microbenchmark | TRT-LLM | GPT-OSS | 1.1.0rc5 | 120B | 1-4 | MXFP4 | Yes | No | Slurm |
Baseline performance metrics were collected using workloads on the NVIDIA DGX H100 Reference Architecture. For more information see DGX H100 Systems.
| Type | Framework | Model | Container Version | Model Size | Scale (# of GPUs) | Precision | Model Access Required | Checkpointing | Cluster Type |
|---|---|---|---|---|---|---|---|---|---|
| Pretrain | Megatron-Bridge | GPT OSS 120B | 26.02.01 | 120B | 64-1024 | BF16 | No | No | Slurm |
| Pretrain | NeMo | Nemotron4 | 25.09.00 | 15B | 16-256 | FP8, BF16 | No | Yes | Slurm |
| Pretrain | NeMo | Nemotron4 | 25.09.00 | 340B | 256-2048 | FP8, BF16 | No | Yes | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 405B | 1024 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 70B | 64-1024 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Llama 3.1 | 26.02.01 | 8B | 8-128 | FP8, BF16 | Yes | Yes | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.01 | 235B | 256-512 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Qwen3 | 26.02.01 | 30B | 16-64 | BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | DeepSeek V3 | 25.09.00 | 671B | 512-1024 | FP8 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | DeepSeek V3 | 25.09.00 | 671B | 1024 | BF16 | Yes | No | Slurm |
| Pretrain | TorchTitan | DeepSeek V3 | 25.12-py3 | 671B | 512-1024 | BF16 | Yes | No | Slurm |
| Pretrain | NeMo | Grok1 | 25.09.00 | 314B | 512-2048 | FP8, BF16 | Yes | No | Slurm |
| Pretrain | Megatron-Bridge | Nemotron-H | 26.02.01 | 56B | 32-1024 | FP8 | No | No | Slurm |
| Finetune | Megatron-Bridge | Llama 3 | 26.02.01 | 70B | 8-16 | FP8, BF16 | Yes | No | Slurm |
| Inference | TRT-LLM | DeepSeek R1 | 1.1.0rc5 | 671B | 16 | FP8 | No | No | Slurm |
| Inference | Dynamo | DeepSeek R1 | 0.6.1 | 671B | 48 | FP8 | No | No | Slurm |
| Inference | TRT-LLM | Llama 3.3 | 1.1.0rc5 | 70B | 2 | FP8 | Yes | No | Slurm |
| Microbenchmark | TRT-LLM | GPT-OSS | 1.1.0rc5 | 120B | 1-4 | MXFP4 | Yes | No | Slurm |
The following recipes are no longer included in the current release; the "Last Version" column lists the last release that contained each recipe.

| Type | Framework | Model | Container Version | Model Size | Scale (# of GPUs) | Precision | Model Access Required | Checkpointing | Cluster Type | Last Version |
|---|---|---|---|---|---|---|---|---|---|---|
| Finetuning | HF | Llama 2 | 24.02-py3 | 70B | 8-512 | FP8, BF16 | Yes | No | Slurm | 25.01.1 |
| Finetuning | HF | Mistral | 24.02-py3 | 7B | 8-256 | FP8, BF16 | Yes | No | Slurm | 25.01.1 |
| Pretrain | Jax | Llama 2 | jax:maxtext-2024-12-09 | 70B | 128-2048 | FP8, BF16 | No | No | Slurm | 25.01.1 |
| Pretrain | Jax | GPT3 | jax:pax-2024-03-04 | 175B | 128-2048 | FP8, BF16 | No | No | Slurm | 25.01.1 |
| Pretrain | Maxtext | Llama3 | 25.01 | 70B | 128-2048 | FP8, BF16 | No | No | Slurm | 25.04.02 |
| Pretrain | NeMo | GPT3 | 24.12 | 175B | 128-2048 | FP8, BF16 | No | No | Slurm | 25.04.02 |
| Pretrain | NeMo | Llama4 Maverick | 25.07.01 | 400B | 512-2048 | FP8, BF16 | Yes | No | Slurm | 25.08 |
| Fine-Tuning (SFT, LORA) | NeMo | Llama 3 | 24.12 | 8B, 70B | 8-32 | FP8, BF16 | Yes | No | Slurm | 25.04.02 |
| Finetune | NeMo | Llama4 Maverick | 25.07.01 | 400B | 256 | FP8, BF16 | Yes | No | Slurm | 25.08 |
| Inference | NIM | Llama 3 | 1.0.3 | 70B | 4 | FP8 | Yes | No | Slurm | 25.05.04 |
| Inference | NIM, SGLang | DeepSeek R1 | 1.7.2 | 671B | 16 | FP8 | No | No | Slurm | 25.08 |
| Inference | NIM & NeMo Retriever (NVIDIA Enterprise RAG) | Llama 3.1 and 3.2 | instruct:1.3.3, rerank:1.3, embed:1.3.1 | 70b, 1b | 1-8 | N/A | Yes | No | Slurm | 25.08 |
| Inference | TRT-LLM | Llama 4 | 1.0.0rc1 | 17b | 8 | FP8 | Yes | No | Slurm | 25.08 |
Most recipes require a Hugging Face account and HF_TOKEN to fetch model metadata (tokenizer/config) from the Hugging Face Hub without running into strict unauthenticated rate limits.
Some recipes additionally require approval for gated model repositories. In those cases, the token is necessary but not sufficient: your Hugging Face account must also be granted access to the model repo.
Note: approval processes are not immediate and may take some time.
| Recipe Type | Recipe Name | HF Token Required | Additional Approval Required | Details/Link for Approval |
|---|---|---|---|---|
| Pretrain | GPT OSS 120B | Yes | No | HuggingFace GPT OSS 120B |
| Pretrain | Llama 3.1 | Yes | Yes | HuggingFace Llama 3.1 |
| Pretrain | DeepSeek V3 | Yes | No | N/A |
| Pretrain | Grok1 | Yes | Yes | Grok1 recipe uses the HuggingFace Llama 3 tokenizer |
| Pretrain | Nemotron4 | Yes | No | N/A |
| Pretrain | Qwen3 235B | Yes | No | HuggingFace Qwen3 235B |
| Pretrain | Qwen3 30B | Yes | No | HuggingFace Qwen3 30B |
| Pretrain | Nemotron-H | No | No | N/A |
| Finetune | Llama 3 | Yes | Yes | HuggingFace Llama 3 70B |
| Inference | Llama 3.3 | Yes | Yes | HuggingFace Llama 3.3 70B Instruct |
| Inference | DeepSeek R1 | Yes | No | N/A |
| Inference | GPT-OSS | Yes | No | HuggingFace GPT OSS 120B |
| Microbenchmark | CPU overhead | Yes | No | HuggingFace GPT-OSS-120B |
The LLM Benchmarking Collection published baseline benchmark results using the following reference infrastructures, CSP-specific configurations, and software.
The following table shows the peak theoretical throughput (in TFLOPS) for different GPU types and data types. These values represent the maximum computational capacity of each GPU architecture and are used for calculating Model FLOPS Utilization (MFU) in performance analysis.
| Data Type | GB300 | GB200 | B300 | B200 | H100 |
|---|---|---|---|---|---|
| BF16 | 2450 | 2450 | 2250 | 2250 | 989 |
| FP8 | 4900 | 4900 | 4500 | 4500 | 1979 |
| NVFP4 | 14700 | 9800 | 13500 | 9000 | - |
Note: These peak theoretical throughput values are based on non-sparse specifications and referenced throughout individual recipe README files for MFU calculations and performance analysis. NVFP4 precision is not supported on Hopper architecture (H100).
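To show how these peak values feed into MFU, here is a minimal shell sketch. The throughput and FLOPs-per-token numbers are hypothetical inputs, not measured recipe results; FLOPs per token for a dense model is commonly approximated as 6 × parameter count.

```bash
# Hypothetical: 400 tokens/s/GPU measured, ~2.43 TFLOPs/token (6 x 405B params),
# H100 FP8 peak of 1979 TFLOPS from the table above.
awk -v tps=400 -v tflops_per_token=2.43 -v peak_tflops=1979 \
    'BEGIN { printf "MFU = %.1f%%\n", 100 * tps * tflops_per_token / peak_tflops }'
# -> MFU = 49.1%
```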
Baseline performance metrics for GB300 workloads were collected using systems equipped with the NVIDIA GB300 Grace Blackwell Superchip. For more information see NVIDIA Blackwell Platform.
- GB300 Grace Blackwell Superchip
- CPU: 72 Arm Neoverse V2 cores with 4x 128b SVE2
- 3.5 GHz (max boost)
- Low-latency coherent interconnect between Grace CPU and B300 GPUs
- RAM: 960 GiB LPDDR5X (2x 480 GiB) | 546 GB/s
- Total Accessible Memory: 2 TiB
- 64x PCIe Gen5 lanes
- 2x B300 GPUs
- 279 GB HBM3e per GPU
- TDP configurable up to 1,400 W
- Memory bandwidth 8 TB/s per GPU
- NVLink: NVLink 5th Generation
- 1.8 TB/s per GPU bandwidth
- System Memory: Coherent memory architecture between Grace CPU and Blackwell GPUs
Baseline performance metrics for GB200 workloads were collected using the NVIDIA GB200 NVL72 Reference Architecture. For more information see NVIDIA GB200 NVL72
- GB200 Grace Blackwell Superchip
- CPU: 72 Arm Neoverse V2 cores with 4x 128b SVE2
- 3.5 GHz (max boost)
- Low-latency coherent interconnect between Grace CPU and B200 GPUs
- RAM: 960 GiB LPDDR5X (2x 480 GiB) | 546 GB/s
- Total Accessible Memory: 1.7 TiB
- 64x PCIe Gen5 lanes
- 2x B200 GPUs
- 186 GB HBM3e per GPU
- Memory bandwidth 8 TB/s per GPU
- NVLink: NVLink 5th Generation
- 1.8 TB/s per GPU bandwidth
Baseline performance metrics for B300 workloads were collected using systems equipped with NVIDIA B300 GPUs. For more information see NVIDIA DGX B300.
- GPU: 8xB300 270 GB HBM3e (2.1 TB total)
- TDP 1100W
- Memory bandwidth 7.7 TB/s per GPU
- CPU: Intel Xeon 6776P x2
- 64 cores per socket
- 3.9 GHz (max turbo) / 4.6 GHz (priority core turbo, up to 8 cores)
- RAM: 2 TB DDR5
- PCIe Gen5
- NVLink: NVLink 5th Generation
- 1.8 TB/s per GPU bandwidth
- SpectrumX:
- Compute links: 8x 800 Gbit/s
- System Memory: 2TB
- Local Storage:
- 2x 1.9TB NVMe M.2
- 8x 3.84TB NVMe E1.S
Baseline performance metrics for B200 workloads were collected using systems equipped with NVIDIA B200 GPUs. For more information see NVIDIA Blackwell Architecture.
- GPU: 8xB200 180 GB HBM3e (1.4 TB total)
- TDP 1000W
- Memory bandwidth 7.7 TB/s per GPU
- CPU: Intel Xeon Platinum 8570 x2
- 56 cores per socket
- 4 GHz (max boost)
- RAM: 1 TiB | 1.6 TB/s per socket
- 48x PCIe Gen5 lanes
- NVLink: NVLink 5th Generation
- 1.8 TB/s per GPU bandwidth
- 18 Links per GPU
- InfiniBand:
- Compute links: 8x 400 Gbit/s
- System Memory: 2TB
Baseline performance metrics for H100 workloads were collected using the NVIDIA DGX H100 Reference Architecture. For more information see DGX H100 Systems.
- GPU: 8xH100 80 GB HBM3 (640 GB total)
- TDP 700W
- Memory bandwidth 3.2 TB/s per GPU
- CPU: 2x Intel Sapphire Rapids, Intel(R) Xeon(R) Platinum 8480C
- 112 cores (56 cores per CPU)
- 2.00 GHz (Base), 3.8 GHz (Max boost)
- NUMA nodes per socket: 1
- PCIe Gen5
- NVLink: NVLink 4th Generation
- 900 GB/s per GPU bandwidth
- 18 Links per GPU
- InfiniBand:
- Compute links: 8x 400 Gbit/s
- Storage links: 2x 400 Gbit/s
- System Memory: 2TB
- Local Storage:
- 2x 1.92TB NVMe M.2
- 8x 3.84TB NVMe U.2
AI platforms vary in implementation, such as in their network fabric and virtualization, and thus require different tuning. For optimal performance, use the correct settings for your platform. The platform-specific tuning examples below are provided as a starting point; further tuning may be necessary if your instance type differs from the Reference Architecture.
For NeMo-based images, EFA support is already included starting with version 25.02 (nvcr.io/nvidia/nemo:25.02).
For other images, or if you need to update EFA support, follow the step-by-step Enable Elastic Fabric Adapter (EFA) guide and use the reference NCCL tests Dockerfile with EFA support.
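To confirm EFA devices are visible on a node (a sketch; fi_info ships with libfabric and may not be installed everywhere):

```bash
fi_info -p efa   # lists EFA interfaces; an error or empty output means EFA is not usable
```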
Ensure that all required pre-conditions for GCP cluster deployment have been met.
Configure the compute fabric with TCP-X by ensuring the following environment variables are set for your environment:

```bash
NCCL_LIB_DIR='/var/lib/tcpxo/lib64' source /var/lib/tcpxo/lib64/nccl-env-profile.sh; \
export NCCL_FASTRAK_CTRL_DEV=enp0s12; \
export NCCL_FASTRAK_IFNAME=enp6s0,enp7s0,enp13s0,enp14s0,enp134s0,enp135s0,enp141s0,enp142s0; \
export NCCL_SOCKET_IFNAME=enp0s12; \
export NCCL_FASTRAK_LLCM_DEVICE_DIRECTORY=/dev/aperture_devices; \
export NCCL_NET=FasTrak; \
ls /var/lib/tcpxo/lib64;
```

Important:

- The above example hasn't been tested with the latest TCP-X version. Check with your cluster admin for the most recent instructions.
- If additional files need to be mounted into the running container, place them under the $LLMB_WORKLOAD folder, as this location is already mounted.
Optimal performance requires two settings:

- NCCL_TOPO_FILE=<path to topo file under $LLMB_WORKLOAD>
  - The VM topology file ensures that the correct CPUs, GPUs, and NICs are bound together. The location of this file varies by cluster, and it must be mounted into the container.
  - Important: Place the NCCL topology file under the $LLMB_WORKLOAD folder, as this location is already mounted into the running container.
- NCCL_P2P_NET_CHUNKSIZE=2097152
  - Increases the NCCL send/recv message size for optimal performance.

Example configuration for a training recipe:

```bash
export NCCL_TOPO_FILE=$LLMB_WORKLOAD/nvd5-topo.xml # Exact location varies by cluster
export NCCL_P2P_NET_CHUNKSIZE=2097152
```

For the latest updates, improvements, and breaking changes, see the CHANGELOG.
This section contains synopses and resolutions for known issues.
Large-scale pre-training run logs contain messages like the following:

```
[userbuffers.cu:userbuffers_fp16_sum_inplace_gpu_rr_rs_oop_fp8:797] [6] Reduce-scatter: SM 18 [2]: expecting 1 got 0
[userbuffers.cu:userbuffers_fp16_sum_inplace_gpu_rr_rs_oop_fp8:797] [6] Reduce-scatter: SM 18 [4]: expecting 1 got 0
[userbuffers.cu:userbuffers_fp16_sum_inplace_gpu_rr_rs_oop_fp8:797] [6] Reduce-scatter: SM 19 [2]: expecting 1 got 0
[userbuffers.cu:userbuffers_fp16_sum_inplace_gpu_rr_rs_oop_fp8:797] [6] Reduce-scatter: SM 19 [4]: expecting 1 got 0
[userbuffers.cu:userbuffers_fp16_sum_inplace_gpu_rr_rs_oop_fp8:797] [6] Reduce-scatter: SM 22 [2]: expecting 1 got 0
[userbuffers.cu:userbuffers_fp16_sum_inplace_gpu_rr_rs_oop_fp8:797] [6] Reduce-scatter: SM 22 [4]: expecting 1 got 0
[userbuffers.cu:userbuffers_fp16_sum_inplace_gpu_rr_rs_oop_fp8:797] [6] Reduce-scatter: SM 23 [2]: expecting 1 got 0
[userbuffers.cu:userbuffers_fp16_sum_inplace_gpu_rr_rs_oop_fp8:797] [6] Reduce-scatter: SM 23 [4]: expecting 1 got 0
```
These messages usually mean that one of the GPUs is hanging. Possible resolutions:

- Re-run the job on a different set of nodes (for example, by excluding the suspect nodes, as sketched below).
- Reboot the affected nodes.
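A minimal sketch of both resolutions using standard Slurm commands, assuming you submit via sbatch directly; the node names and launch.sh invocation are hypothetical, and scontrol reboot requires admin privileges:

```bash
# Exclude suspect nodes when resubmitting the job
sbatch --exclude=node017,node042 launch.sh

# Ask Slurm to reboot the affected nodes once they drain (admin only)
scontrol reboot ASAP nextstate=RESUME node017,node042
```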
A Slurm job failed during a benchmark run. For example, a Nemotron benchmark job with ID 2041792 failed:
```
sacct -j 2041792
JobID        JobName    Partition  Account    AllocCPUS  State      ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
2041792      launch.sh  batch      test       224        FAILED     1:0
2041792.bat+ batch                 test       224        FAILED     1:0
2041792.ext+ extern                test       224        COMPLETED  0:0
2041792.0    bash                  test       224        FAILED     1:0
```
You can find the log files associated with this run under the $LLMB_WORKLOAD/experiments/pretrain_nemotron4_<size>_<dtype>_<scale>_<config> folder. That folder has subfolders containing log-account.pretrain_nemotron4_<size>_<dtype>_<scale>_<config>.out files with a root-cause error message.
For example, for the job failure above, assuming the Nemotron 15B job ran on 16 GPUs, used version 25.05, and ran with precision bf16, the path will be under $LLMB_WORKLOAD/experiments/pretrain_nemotron4_15b_bf16_gpus16_tp1_pp1_cp1_vp1_mbs2_gbs64/...
Search for errors in the log-account.pretrain_nemotron4_15b_bf16_gpus16_tp1_pp1_cp1_vp1_mbs2_gbs64_3358926_0.out file.
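A quick way to surface the first failure signatures across those logs (a sketch; the path reuses the example above):

```bash
grep -riE "error|traceback|CUDA out of memory" \
  $LLMB_WORKLOAD/experiments/pretrain_nemotron4_15b_bf16_gpus16_tp1_pp1_cp1_vp1_mbs2_gbs64/ | head
```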
If a benchmark requires a Python virtual environment (venv) but the virtualenv executable isn't available on the login node, and/or login nodes cannot be updated by non-sudo users, you will see errors like the following when trying to set up the venv:

```
bash-5.2$ virtualenv
bash: virtualenv: command not found
```

There are alternative virtual environment options available, such as conda.
To install and activate a conda virtual environment:

```bash
# pick INSTALL_PATH with sufficient disk space
INSTALL_PATH=~
wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O $INSTALL_PATH/miniconda.sh
bash $INSTALL_PATH/miniconda.sh -b -p $INSTALL_PATH/miniconda3
$INSTALL_PATH/miniconda3/bin/conda init
source ~/.bashrc
```

When you are finished running the benchmark, deactivate the environment:

```bash
conda deactivate
```

Some recipes set NCCL_IB_QPS_PER_CONNECTION=4 by default. This controls the number of InfiniBand queue pairs NCCL uses per connection and can improve multi-node communication performance on certain cluster configurations.
If you need to set or override this value, there are two options:
Option A: Add it to the environment section of your cluster_config.yaml (applies to all jobs launched from that installation):

```yaml
environment:
  NCCL_IB_QPS_PER_CONNECTION: 4
```

Option B: Pass it inline when submitting a single job:

```bash
NCCL_IB_QPS_PER_CONNECTION=4 llmb-run submit -w <workload> -s <size> --dtype <precision> --scale <number>
```

Note: The optimal value may vary by cluster and workload. If you experience communication errors or degraded performance after changing this setting, try removing it or adjusting the value.
The pretrain_llama3.1 workload is the user-facing recipe for 8B, 70B, and 405B. Internally, the 8B and 70B sizes reuse existing Megatron-Bridge llama3 configs instead of duplicating them under a separate llama3.1 name. As a result, setup output for 8B/70B may show Meta-Llama-3-*, and experiment or log names may use the pretrain_llama3 prefix. This is expected and does not mean the wrong workload or model size was selected.
The llmb-install tool currently supports only one GPU type per installation. If your cluster contains multiple GPU types (e.g., H100 and B200), you cannot install workloads for both GPU types in a single installation.
Create separate installations for each GPU type:
1. Run the installer once for your first GPU type (e.g., H100):

   ```bash
   ./install.sh  # Select H100 workloads and specify an installation directory
   ```

2. Run the installer again for your second GPU type (e.g., B200):

   ```bash
   ./install.sh  # Select B200 workloads and specify a different installation directory
   ```
Each installation will have its own LLMB_INSTALL directory. Use the appropriate LLMB_INSTALL directory for running workloads for each GPU type.
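A sketch of switching between the two installations (the directory names and workload arguments are hypothetical):

```bash
# Run H100 workloads from the H100 installation
export LLMB_INSTALL=/shared/llmb-h100
cd $LLMB_INSTALL && llmb-run submit -w pretrain_llama3.1 -s 70b --dtype fp8 --scale 64

# Switch to the B200 installation for B200 workloads
export LLMB_INSTALL=/shared/llmb-b200
cd $LLMB_INSTALL && llmb-run submit -w pretrain_llama3.1 -s 70b --dtype fp8 --scale 64
```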
Some workloads complete all timesteps but print errors during the cleanup phase. This previously caused the Slurm job to be marked as failed.
We now detect this case and convert the exit code so Slurm reports success when the run actually finished. Log files will still contain the cleanup errors. If the job completed all timesteps and Slurm shows COMPLETED, you can ignore cleanup errors in the logs. This will be fixed in a future release.
Nearly every recipe installs nemo_run and will fail with uv 0.9.29+ due to uv rejecting unknown fields in pyproject.toml files.
Run ./install.sh from this release. It enforces uv <=0.9.28, which avoids the strict parser breakage.
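If you manage uv outside of install.sh (for example, via pip, which is an assumption about your setup), you can pin it manually:

```bash
pip install 'uv<=0.9.28'
uv --version   # confirm the version on PATH is at or below 0.9.28
```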
The nvcr.io/nvidia/nemo:26.02.00 container ships a bundled rdma-core (/opt/rdma-core/build/lib/) that conflicts with the container's own EFA libraries, causing NCCL to fall back to Socket transport. This issue is fixed in the NeMo 26.02.01 container.
LLMB 26.02.01 uses NeMo 26.02.01 for most recipes, but a small number of recipe/GPU combinations are still pinned to NeMo 26.02.00 and require the workaround below.
Affected recipes in this release:
- Llama 3.1 on B200
- Qwen3 on GB300
See Megatron-Bridge #2824 for details.
This workaround applies only to affected recipes using the nvcr.io/nvidia/nemo:26.02.00 container. The workload tables list the container version for each recipe/GPU combination; the installed recipe's launch.sh is the source of truth if the table and script differ.
Create a patched container image by removing the conflicting library directory:
```bash
srun -N1 --container-image=$LLMB_INSTALL/images/nvidia+nemo+26.02.00.sqsh \
     --container-save=$LLMB_INSTALL/images/nvidia+nemo+26.02.00-efa-fix.sqsh \
     --pty /bin/bash

# Inside the container:
rm -rf /opt/rdma-core/build/lib/
ldconfig
exit
```

Then update the affected recipe's launch.sh under your install directory ($LLMB_INSTALL/llmb_repo/**/launch.sh, not the source repo) to use the patched image:
```bash
# Before:
export IMAGE=${RUN_CONF_IMAGE:-$LLMB_INSTALL/images/nvidia+nemo+$FW_VERSION.sqsh}
# After:
export IMAGE=${RUN_CONF_IMAGE:-$LLMB_INSTALL/images/nvidia+nemo+26.02.00-efa-fix.sqsh}
```

The NeMo 26.02.01 container fixes the NeMo 26.02.00 EFA library conflict above. The following limitations still apply when running these recipes on AWS EFA clusters:
- DeepSeek V3 Megatron-Bridge on H100: not supported on EFA. The H100 recipe uses NeMo 25.09.00 and still has NVSHMEM/EFA initialization issues.
- DeepSeek V3 TorchTitan: not validated on EFA. The recipe uses PyTorch 25.12-py3 and has unresolved NVSHMEM/EFA issues.
- Qwen3 30B on H100: not supported on EFA. The H100 configuration uses EP=16, which requires expert-parallel communication between nodes over EFA and exposes the Megatron-Bridge EP communication issue tracked in Megatron-Bridge #3343.
- Grok1 and Nemotron4: EFA failures have been observed with the older NeMo containers used by these recipes (25.09.00 or 25.07.01, depending on GPU type). If EFA failures occur, update the container with current NCCL, EFA, and AWS OFI NCCL packages. See the AWS CSP section for EFA update references.
- Qwen3 235B: supported on GB300/GB200 systems. H100 EFA is not validated in this release.
The current Megatron-Bridge launch configuration does not include the fixed-core CPU binding (-C $((SLURM_LOCALID * 16)),...) used on the B300 reference configuration. Instead, it binds processes at the NUMA-node level only.
This is intentional as the general default: on B300 systems where Intel Granite Rapids (GNR) PCT is not available or not enabled, forcing this stricter binding can hurt performance or break recipes. However, on the small subset of GNR processors that support PCT, and only when PCT is enabled, restoring this fixed-core binding can provide the best performance for recipes like Qwen.
We refer to this as the "B300" pinning configuration because it matches the B300 reference configuration, but it is a CPU-platform-specific optimization rather than a B300 GPU feature.
A patch file is provided at qwen3/pretrain/b300_numa_cpu_pinning.patch to restore this fixed-core binding. Apply it only if PCT is available and enabled on your system; in that case it will likely provide the best performance for recipes like Qwen. Do not apply it on systems without PCT.
The example below patches the Qwen3 pretrain workload only. Each workload has its own Megatron-Bridge checkout, so to make the same change for another recipe you must apply an equivalent patch in that workload's Megatron-Bridge directory as well.

Apply the Qwen3 patch from the root of that workload's Megatron-Bridge installation:

```bash
cd $LLMB_INSTALL/workloads/pretrain_qwen3/Megatron-Bridge
git apply $LLMB_INSTALL/llmb_repo/qwen3/pretrain/b300_numa_cpu_pinning.patch
```

Terminology used in these recipes is explained in the Appendix.
For questions or to provide feedback, please contact LLMBenchmarks@nvidia.com