Running gpt-oss-120b Disaggregated with TensorRT-LLM
Dynamo supports disaggregated serving of gpt-oss-120b with TensorRT-LLM. This guide demonstrates how to deploy gpt-oss-120b using disaggregated prefill/decode serving on a single B200 node with 8 GPUs, running 1 prefill worker on 4 GPUs and 1 decode worker on 4 GPUs.
Overview
This deployment uses disaggregated serving in TensorRT-LLM where:
- Prefill Worker: Processes input prompts efficiently using 4 GPUs with tensor parallelism
- Decode Worker: Generates output tokens using 4 GPUs, optimized for token generation throughput
- Frontend: Provides OpenAI-compatible API endpoint with round-robin routing
The disaggregated approach optimizes for both low-latency (maximizing tokens per second per user) and high-throughput (maximizing total tokens per GPU per second) use cases by separating the compute-intensive prefill phase from the memory-bound decode phase.
Prerequisites
- 1x NVIDIA B200 node with 8 GPUs (this guide focuses on single-node B200 deployment)
- CUDA Toolkit 12.8 or later
- Docker with NVIDIA Container Toolkit installed
- Fast SSD storage for model weights (~240GB required)
- HuggingFace account and access token
- HuggingFace CLI
Ensure that the etcd and nats services are running with the following command:
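A minimal way to start both services, assuming you are working from the Dynamo repository root and it ships the usual compose file (the `deploy/docker-compose.yml` path is an assumption; adjust to your checkout):

```shell
# Start etcd and NATS in the background using the compose file
# bundled with the Dynamo repository (assumed path).
docker compose -f deploy/docker-compose.yml up -d
```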
Instructions
1. Download the Model
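A hedged sketch using the HuggingFace CLI from the prerequisites; the target directory is a placeholder, point it at your fast SSD:

```shell
# Download the gpt-oss-120b weights (~240GB) to a local directory.
# Requires HF_TOKEN to be set, or run `huggingface-cli login` first.
huggingface-cli download openai/gpt-oss-120b --local-dir /path/to/models/gpt-oss-120b
```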
2. Run the Container
Set the container image:
Launch the Dynamo TensorRT-LLM container with the necessary configurations:
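One possible invocation matching the flags described below; the image tag, host paths, and the exact environment variable names for PDL and weight loading are assumptions you should check against your Dynamo release:

```shell
export IMAGE=<dynamo-trtllm-image>   # placeholder: your Dynamo TensorRT-LLM image tag

docker run --rm -it \
  --gpus all \
  --network host \
  --ipc=host \
  --shm-size=10g \
  --ulimit stack=67108864 \
  -v /path/to/models/gpt-oss-120b:/model \
  -v "$(pwd)":/workspace/dynamo \
  -e HF_TOKEN="$HF_TOKEN" \
  -e TRTLLM_ENABLE_PDL=1 \
  "$IMAGE"
```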
This command:
- `--rm` - Automatically removes the container when stopped
- `--ipc=host` - Allows the container to interact with the host's IPC resources for optimal performance
- `-it` - Runs the container in interactive mode
- Sets up shared memory and stack limits for optimal performance
- Mounts your model directory into the container at `/model`
- Mounts the current Dynamo workspace into the container at `/workspace/dynamo`
- Enables PDL and disables parallel weight loading
- Sets the HuggingFace token as an environment variable in the container
3. Understanding the Configuration
The deployment uses configuration files and command-line arguments to control behavior:
Configuration Files
Prefill Configuration (examples/backends/trtllm/engine_configs/gpt-oss-120b/prefill.yaml):
- `enable_attention_dp: false` - Attention data parallelism disabled for prefill
- `enable_chunked_prefill: true` - Enables efficient chunked prefill processing
- `moe_config.backend: CUTLASS` - Uses optimized CUTLASS kernels for MoE layers
- `cache_transceiver_config.backend: ucx` - Uses UCX for efficient KV cache transfer
- `cuda_graph_config.max_batch_size: 32` - Maximum batch size for CUDA graphs
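Putting the settings above together, the prefill configuration would look roughly like this (only the options discussed here are shown; the real file may contain more keys):

```yaml
# Sketch of examples/backends/trtllm/engine_configs/gpt-oss-120b/prefill.yaml
# (only the options discussed above, not the complete file)
enable_attention_dp: false
enable_chunked_prefill: true
moe_config:
  backend: CUTLASS
cache_transceiver_config:
  backend: ucx
cuda_graph_config:
  max_batch_size: 32
```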
Decode Configuration (examples/backends/trtllm/engine_configs/gpt-oss-120b/decode.yaml):
- `enable_attention_dp: true` - Attention data parallelism enabled for decode
- `disable_overlap_scheduler: false` - Enables overlapping for decode efficiency
- `moe_config.backend: CUTLASS` - Uses optimized CUTLASS kernels for MoE layers
- `cache_transceiver_config.backend: ucx` - Uses UCX for efficient KV cache transfer
- `cuda_graph_config.max_batch_size: 128` - Maximum batch size for CUDA graphs
Command-Line Arguments
Both workers receive these key arguments:
- `--tensor-parallel-size 4` - Uses 4 GPUs for tensor parallelism
- `--expert-parallel-size 4` - Expert parallelism across 4 GPUs
- `--free-gpu-memory-fraction 0.9` - Allocates 90% of GPU memory
Prefill-specific arguments:
- `--max-num-tokens 20000` - Maximum tokens for prefill processing
- `--max-batch-size 32` - Maximum batch size for prefill
Decode-specific arguments:
- `--max-num-tokens 16384` - Maximum tokens for decode processing
- `--max-batch-size 128` - Maximum batch size for decode
4. Launch the Deployment
Note that gpt-oss is a reasoning model with tool calling support. To ensure responses are processed correctly, the workers should be launched with the appropriate `--dyn-reasoning-parser` and `--dyn-tool-call-parser` flags.
You can use the provided launch script or run the components manually:
Option A: Using the Launch Script
Option B: Manual Launch
- Start frontend:
- Launch prefill worker:
- Launch decode worker:
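A hedged sketch of the three launches, run from inside the container. The `dynamo.frontend`/`dynamo.trtllm` module names, the `--disaggregation-mode` flag, the parser names, and the GPU split are assumptions based on typical Dynamo usage; the parallelism and batching flags come from the configuration section above:

```shell
# 1. Frontend: OpenAI-compatible endpoint with round-robin routing
python -m dynamo.frontend --http-port 8000 &

# 2. Prefill worker on GPUs 0-3
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m dynamo.trtllm \
  --model-path /model \
  --extra-engine-args examples/backends/trtllm/engine_configs/gpt-oss-120b/prefill.yaml \
  --disaggregation-mode prefill \
  --tensor-parallel-size 4 --expert-parallel-size 4 \
  --free-gpu-memory-fraction 0.9 \
  --max-num-tokens 20000 --max-batch-size 32 \
  --dyn-reasoning-parser gpt_oss --dyn-tool-call-parser harmony &

# 3. Decode worker on GPUs 4-7
CUDA_VISIBLE_DEVICES=4,5,6,7 python -m dynamo.trtllm \
  --model-path /model \
  --extra-engine-args examples/backends/trtllm/engine_configs/gpt-oss-120b/decode.yaml \
  --disaggregation-mode decode \
  --tensor-parallel-size 4 --expert-parallel-size 4 \
  --free-gpu-memory-fraction 0.9 \
  --max-num-tokens 16384 --max-batch-size 128 \
  --dyn-reasoning-parser gpt_oss --dyn-tool-call-parser harmony &
```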
6. Verify the Deployment is Ready
Poll the /health endpoint to verify that both the prefill and decode worker endpoints have started:
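For example, assuming the frontend listens on its default port 8000 (`jq` is optional, for readable output):

```shell
# Query the frontend health endpoint; the listed instances should include
# both the prefill and decode worker endpoints once they are up.
curl -s localhost:8000/health | jq
```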
Make sure that both of the endpoints are available before sending an inference request:
If only one worker endpoint is listed, the other may still be starting up. Monitor the worker logs to track startup progress.
7. Test the Deployment
Send a test request to verify the deployment:
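For example (the served model name is assumed to be `openai/gpt-oss-120b`; use whatever name your workers registered):

```shell
# Simple chat completion against the frontend
curl -s localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-oss-120b",
        "messages": [{"role": "user", "content": "Hello, what is your name?"}],
        "max_tokens": 64
      }'
```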
The server exposes a standard OpenAI-compatible API endpoint that accepts JSON requests. You can adjust parameters like max_tokens, temperature, and others according to your needs.
8. Reasoning and Tool Calling
Dynamo supports reasoning and tool calling through the OpenAI Chat Completions endpoint. A typical application built on top of Dynamo provides a set of tools to help the assistant give accurate answers, and the workflow is usually multi-turn, since it involves tool selection followed by generation based on the tool result.
In addition, the reasoning effort can be configured through `chat_template_args`. Increasing the reasoning effort makes the model more accurate but also slower. Three levels are supported: `low`, `medium`, and `high`.
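For example, a request pinning the effort to `high` could look like this (passing `reasoning_effort` inside `chat_template_args` is how gpt-oss chat templates typically expose it; verify the key name against your model's template):

```shell
curl -s localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-oss-120b",
        "messages": [{"role": "user", "content": "Explain KV cache transfer in one paragraph."}],
        "max_tokens": 256,
        "chat_template_args": {"reasoning_effort": "high"}
      }'
```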
Below is an example of sending multi-round requests to complete a user query with reasoning and tool calling:
Application setup (pseudocode)
First request with tools
First response with tool choice
Second request with tool calling result
Second response with final message
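The flow above can be sketched with `curl`; the weather tool, its result, and the `call_1` id are invented for illustration (real responses return their own tool call ids):

```shell
# Round 1: send the user query together with the available tools.
curl -s localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-oss-120b",
        "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
        "tools": [{
          "type": "function",
          "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"]
            }
          }
        }]
      }'
# The assistant message in the response contains tool_calls selecting get_weather.

# Round 2: append the assistant tool call and the tool result, then ask again.
curl -s localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-oss-120b",
        "messages": [
          {"role": "user", "content": "What is the weather in Paris?"},
          {"role": "assistant", "tool_calls": [{
            "id": "call_1", "type": "function",
            "function": {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"}
          }]},
          {"role": "tool", "tool_call_id": "call_1", "content": "18C, light rain"}
        ]
      }'
# The assistant now produces the final natural-language answer.
```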
Benchmarking
Performance Testing with AIPerf
The Dynamo container includes AIPerf, NVIDIA’s tool for benchmarking generative AI models. This tool helps measure throughput, latency, and other performance metrics for your deployment.
Run the following benchmark from inside the container (after completing the deployment steps above):
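One possible invocation matching the numbers described below; exact AIPerf flag names can differ between releases, so check `aiperf profile --help`:

```shell
aiperf profile \
  --model openai/gpt-oss-120b \
  --url http://localhost:8000 \
  --endpoint-type chat \
  --streaming \
  --concurrency 256 \
  --request-count 6144 \
  --warmup-request-count 1000 \
  --synthetic-input-tokens-mean 32000 \
  --output-tokens-mean 256 \
  --artifact-dir /tmp/benchmark-results
```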
What This Benchmark Does
This command:
- Tests chat completions with streaming responses against the disaggregated deployment
- Simulates high load with 256 concurrent requests and 6144 total requests
- Uses long context inputs (32K tokens) to test prefill performance
- Generates consistent outputs (256 tokens) to measure decode throughput
- Includes a warmup period (1000 requests) to stabilize performance metrics
- Saves detailed results to `/tmp/benchmark-results` for analysis
Key parameters you can adjust:
- `--concurrency` - Number of simultaneous requests (impacts GPU utilization)
- `--synthetic-input-tokens-mean` - Average input length (tests prefill capacity)
- `--output-tokens-mean` - Average output length (tests decode throughput)
- `--request-count` - Total number of requests for the benchmark
Installing AIPerf Outside the Container
If you prefer to run benchmarks from outside the container:
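AIPerf is distributed on PyPI (the package name `aiperf` is an assumption; verify before installing):

```shell
# Install AIPerf into a local virtual environment
python3 -m venv aiperf-env && source aiperf-env/bin/activate
pip install aiperf
aiperf --help
```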
Architecture Overview
The disaggregated architecture separates prefill and decode phases:
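A rough sketch of the request flow on the 8-GPU node, based on the components described in this guide (the GPU assignment shown is one possible mapping):

```
        HTTP (OpenAI-compatible, port 8000)
                      |
                  Frontend
             (round-robin routing)
               /              \
      Prefill Worker      Decode Worker
       GPUs 0-3 (TP4)      GPUs 4-7 (TP4)
               \              /
            KV cache transfer (UCX)
```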
Key Features
- Disaggregated Serving: Separates compute-intensive prefill from memory-bound decode operations
- Optimized Resource Usage: Different parallelism strategies for prefill vs decode
- Scalable Architecture: Easy to adjust worker counts based on workload
- TensorRT-LLM Optimizations: Leverages TensorRT-LLM’s efficient kernels and memory management
Troubleshooting
Common Issues
- CUDA Out-of-Memory Errors
  - Reduce `--max-num-tokens` in the launch commands (currently 20000 for prefill, 16384 for decode)
  - Lower `--free-gpu-memory-fraction` from 0.9 to 0.8 or 0.7
  - Ensure model checkpoints are compatible with the expected format
- Workers Not Connecting
  - Ensure etcd and NATS services are running: `docker ps | grep -E "(etcd|nats)"`
  - Check network connectivity between containers
  - Verify `CUDA_VISIBLE_DEVICES` settings match your GPU configuration
  - Check that no other processes are using the assigned GPUs
- Performance Issues
  - Monitor GPU utilization with `nvidia-smi` while the deployment is running
  - Check worker logs for bottlenecks or errors
  - Ensure that batch sizes in manual commands match those in configuration files
  - Adjust chunked prefill settings based on your workload
  - For connection issues, ensure port 8000 is not being used by another application
- Container Startup Issues
  - Verify that the NVIDIA Container Toolkit is properly installed
  - Check that the Docker daemon is running with GPU support
  - Ensure sufficient disk space for model weights and container images
Next Steps
- Production Deployment: For multi-node deployments, see the Multi-node Guide
- Advanced Configuration: Explore TensorRT-LLM engine building options for further optimization
- Monitoring: Set up Prometheus and Grafana for production monitoring
- Performance Benchmarking: Use AIPerf to measure and optimize your deployment performance