TensorRT-LLM
LLM Deployment using TensorRT-LLM
This directory contains examples and reference implementations for deploying Large Language Models (LLMs) in various configurations using TensorRT-LLM.
Use the Latest Release
We recommend using the latest stable release of Dynamo to avoid breaking changes:
You can find the latest release here and check out the corresponding branch with:
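For example, a minimal sketch (the tag shown is a placeholder; substitute the actual release tag from the releases page):

```bash
# Check out the branch or tag that corresponds to the latest release
git checkout <latest-release-tag>
```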
Table of Contents
- Feature Support Matrix
- Quick Start
- Single Node Examples
- Advanced Examples
- KV Cache Transfer
- Client
- Benchmarking
- Multimodal Support
- Video Diffusion Support
- Logits Processing
- DP Rank Routing
- Performance Sweep
- Known Issues and Mitigations
Feature Support Matrix
Core Dynamo Features
Large Scale P/D and WideEP Features
TensorRT-LLM Quick Start
Below is a guide that lets you run all of the common deployment patterns on a single node.
Start Infrastructure Services (Local Development Only)
For local/bare-metal development, start etcd and optionally NATS using Docker Compose:
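As a sketch, assuming the compose file ships at `deploy/docker-compose.yml` in the dynamo repository (adjust the path to your checkout):

```bash
# Start etcd and NATS in the background
docker compose -f deploy/docker-compose.yml up -d
```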
- etcd is optional but is the default local discovery backend. You can also use `--discovery-backend file` to use file-system-based discovery.
- NATS is optional and only needed if using KV routing with events. Workers must be explicitly configured to publish events. Use `--no-router-kv-events` on the frontend for prediction-based routing without events.
- On Kubernetes, neither is required when using the Dynamo operator, which explicitly sets `DYN_DISCOVERY_BACKEND=kubernetes` to enable native K8s service discovery (DynamoWorkerMetadata CRD).
Build container
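A sketch of the build step; the helper script path and flag are assumptions based on the repository's container tooling, so verify against your checkout:

```bash
# Build the TensorRT-LLM container image
./container/build.sh --framework trtllm
```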
Run container
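And a corresponding run sketch (script name and flags are assumptions; adjust for your environment and GPU setup):

```bash
# Launch the container interactively with the TensorRT-LLM framework
./container/run.sh --framework trtllm -it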
Single Node Examples
Below are simple shell scripts that run the components for each configuration. Each script simply runs `python3 -m dynamo.frontend <args>` to start the ingress and `python3 -m dynamo.trtllm <args>` to start the workers. You can easily take each command and run it in a separate terminal.
For detailed information about the architecture and how KV-aware routing works, see the Router Guide.
Aggregated
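As a minimal sketch of this pattern (the model ID and flags are placeholders, not the exact scripts shipped in the repo):

```bash
# Terminal 1: start the OpenAI-compatible frontend (ingress)
python3 -m dynamo.frontend --http-port 8000

# Terminal 2: start an aggregated TensorRT-LLM worker
python3 -m dynamo.trtllm \
  --model-path deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
  --served-model-name deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```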
Aggregated with KV Routing
Disaggregated
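A hedged sketch of the disaggregated pattern; the disaggregation flag names are assumptions, so confirm them with `python3 -m dynamo.trtllm --help`:

```bash
# Terminal 1: frontend
python3 -m dynamo.frontend --http-port 8000

# Terminal 2: prefill worker
python3 -m dynamo.trtllm --model-path <model> --disaggregation-mode prefill

# Terminal 3: decode worker
python3 -m dynamo.trtllm --model-path <model> --disaggregation-mode decode
```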
Disaggregated with KV Routing
Aggregated with Multi-Token Prediction (MTP) and DeepSeek R1
Notes:
- The first two inference requests incur noticeable latency. Send warm-up requests before starting the benchmark.
- MTP performance may vary depending on the acceptance rate of predicted tokens, which depends on the dataset or queries used while benchmarking. Additionally, `ignore_eos` should generally be omitted or set to `false` when using MTP to avoid speculating garbage outputs and getting unrealistic acceptance rates.
Advanced Examples
Below is a selected list of advanced examples. Please open an issue if you'd like to see a specific example!
Multinode Deployment
For comprehensive instructions on multinode serving, see the multinode-examples.md guide. It provides step-by-step deployment examples and configuration tips for running Dynamo with TensorRT-LLM across multiple nodes. While the walkthrough uses DeepSeek-R1 as the model, you can easily adapt the process for any supported model by updating the relevant configuration files. See the Llama4+eagle guide to learn how to use these scripts when a single worker fits on a single node.
Speculative Decoding
Kubernetes Deployment
For complete Kubernetes deployment instructions, configurations, and troubleshooting, see TensorRT-LLM Kubernetes Deployment Guide.
Client
See the client section to learn how to send requests to the deployment.
NOTE: To send a request to a multi-node deployment, target the node that is running `python3 -m dynamo.frontend <args>`.
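For a quick smoke test, a request against the OpenAI-compatible endpoint might look like the following (port and model name are placeholders for your deployment):

```bash
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "<served-model-name>",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64
  }'
```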
Benchmarking
To benchmark your deployment with AIPerf, see the utility script perf.sh, configuring the model name and host based on your deployment.
KV Cache Transfer in Disaggregated Serving
Dynamo with TensorRT-LLM supports two methods for transferring KV cache in disaggregated serving: UCX (default) and NIXL (experimental). For detailed information and configuration instructions for each method, see the KV cache transfer guide.
Request Migration
Dynamo supports request migration to handle worker failures gracefully. When enabled, requests can be automatically migrated to healthy workers if a worker fails mid-generation. See the Request Migration Architecture documentation for configuration details.
Request Cancellation
When a user cancels a request (e.g., by disconnecting from the frontend), the request is automatically cancelled across all workers, freeing compute resources for other requests.
Cancellation Support Matrix
For more details, see the Request Cancellation Architecture documentation.
Multimodal Support
Dynamo with the TensorRT-LLM backend supports multimodal models, enabling you to process both text and images (or pre-computed embeddings) in a single request. For detailed setup instructions, example requests, and best practices, see the TensorRT-LLM Multimodal Guide.
Video Diffusion Support (Experimental)
Dynamo supports video generation using diffusion models through the `--modality video_diffusion` flag.
Requirements
- visual_gen: Part of TensorRT-LLM, located at `tensorrt_llm/visual_gen/`. Currently available only on the `feat/visual_gen` branch (not yet merged to main or any release). Install from source.
- dynamo-runtime with video API: The Dynamo runtime must include `ModelType.Videos` support. Ensure you're using a compatible version.
Supported Models
The pipeline type is auto-detected from the model's `model_index.json`; no `--model-type` flag is needed.
Quick Start
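A hedged launch sketch using the `--modality video_diffusion` flag named above (the model path and other flags are placeholders):

```bash
# Terminal 1: frontend
python3 -m dynamo.frontend --http-port 8000

# Terminal 2: video diffusion worker
python3 -m dynamo.trtllm --modality video_diffusion --model-path <path-or-hf-id>
```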
API Endpoint
Video generation uses the `/v1/videos` endpoint.
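As a sketch, a request might look like the following; the payload fields shown are assumptions based on typical video-generation APIs, not a confirmed schema:

```bash
curl -s http://localhost:8000/v1/videos \
  -H "Content-Type: application/json" \
  -d '{
    "model": "<served-model-name>",
    "prompt": "A timelapse of clouds rolling over a mountain ridge"
  }'
```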
Configuration Options
Limitations
- Video diffusion is experimental and not recommended for production use
- Only text-to-video is supported in this release (image-to-video planned)
- Requires GPU with sufficient VRAM for the diffusion model
Logits Processing
Logits processors let you modify the next-token logits at every decoding step (e.g., to apply custom constraints or sampling transforms). Dynamo provides a backend-agnostic interface and an adapter for TensorRT-LLM so you can plug in custom processors.
How it works
- Interface: Implement `dynamo.logits_processing.BaseLogitsProcessor`, which defines `__call__(input_ids, logits)` and modifies `logits` in-place.
- TRT-LLM adapter: Use `dynamo.trtllm.logits_processing.adapter.create_trtllm_adapters(...)` to convert Dynamo processors into TRT-LLM-compatible processors and assign them to `SamplingParams.logits_processor`.
- Examples: See example processors in `lib/bindings/python/src/dynamo/logits_processing/examples/` (temperature, hello_world).
Quick test: HelloWorld processor
You can enable a test-only processor that forces the model to respond with “Hello world!”. This is useful to verify the wiring without modifying your model or engine code.
Notes:
- When enabled, Dynamo initializes the tokenizer so the HelloWorld processor can map text to token IDs.
- Expected chat response contains “Hello world”.
Bring your own processor
Implement a processor by conforming to `BaseLogitsProcessor`, modifying logits in-place. For example, temperature scaling:
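A minimal sketch, assuming `logits` arrives as a mutable `torch.Tensor` per the in-place contract described above:

```python
import torch

from dynamo.logits_processing import BaseLogitsProcessor


class TemperatureProcessor(BaseLogitsProcessor):
    """Scales logits by 1/temperature at every decoding step."""

    def __init__(self, temperature: float):
        if temperature <= 0:
            raise ValueError("temperature must be > 0")
        self.temperature = temperature

    def __call__(self, input_ids, logits: torch.Tensor) -> None:
        # Modify logits in-place; do not return a new tensor.
        logits.div_(self.temperature)
```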
Wire it into TRT-LLM by adapting the processor and attaching it to `SamplingParams`:
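A wiring sketch using the adapter module named above (engine and worker setup omitted):

```python
from tensorrt_llm import SamplingParams

from dynamo.trtllm.logits_processing.adapter import create_trtllm_adapters

processors = [TemperatureProcessor(temperature=0.7)]

sampling_params = SamplingParams(max_tokens=64)
# Convert Dynamo processors into TRT-LLM-compatible ones and attach them.
sampling_params.logits_processor = create_trtllm_adapters(processors)
```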
Current limitations
- Per-request processing only (batch size must be 1); beam width > 1 is not supported.
- Processors must modify logits in-place and not return a new tensor.
- If your processor needs tokenization, ensure the tokenizer is initialized (do not skip tokenizer init).
DP Rank Routing (Attention Data Parallelism)
TensorRT-LLM supports attention data parallelism (attention DP) for models like DeepSeek. When enabled, multiple attention DP ranks run within a single worker, each with its own KV cache. Dynamo can route requests to specific DP ranks based on KV cache state.
Dynamo vs TRT-LLM Internal Routing
- Dynamo DP Rank Routing: The router selects the optimal DP rank based on KV cache overlap and instructs TRT-LLM to use that rank with strict routing (`attention_dp_relax=False`). Use this with `--router-mode kv` for cache-aware routing.
- TRT-LLM Internal Routing: TRT-LLM's scheduler assigns DP ranks internally. Use this with `--router-mode round-robin` or `random` when KV-aware routing isn't needed.
Enabling DP Rank Routing
The `--enable-attention-dp` flag sets `attention_dp_size = tensor_parallel_size` and configures Dynamo to publish KV events per DP rank. The router automatically creates routing targets for each `(worker_id, dp_rank)` combination.
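A sketch of the two launch commands; `--router-mode kv` and `--enable-attention-dp` come from this guide, while the remaining worker flags are placeholders:

```bash
# Frontend with KV-aware routing so the router can pick a DP rank
python3 -m dynamo.frontend --router-mode kv

# Worker with attention DP enabled (attention_dp_size follows tensor_parallel_size)
python3 -m dynamo.trtllm \
  --model-path <model> \
  --tensor-parallel-size 8 \
  --enable-attention-dp
```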
Performance Sweep
For detailed instructions on running comprehensive performance sweeps across both aggregated and disaggregated serving configurations, see the TensorRT-LLM Benchmark Scripts for DeepSeek R1 model. This guide covers recommended benchmarking setups, usage of provided scripts, and best practices for evaluating system performance.
Dynamo KV Block Manager Integration
Dynamo with TensorRT-LLM currently supports integration with the Dynamo KV Block Manager. This integration can significantly reduce time-to-first-token (TTFT) latency, particularly in usage patterns such as multi-turn conversations and repeated long-context requests.
For setup instructions, see Running KVBM in TensorRT-LLM.
Known Issues and Mitigations
KV Cache Exhaustion Causing Worker Deadlock (Disaggregated Serving)
Issue: In disaggregated serving mode, TensorRT-LLM workers can become stuck and unresponsive after sustained high-load traffic. Once in this state, workers require a pod/process restart to recover.
Symptoms:
- Workers function normally initially but hang after heavy load testing
- Inference requests get stuck and eventually time out
- Logs show warnings: `num_fitting_reqs=0 and fitting_disagg_gen_init_requests is empty, may not have enough kvCache`
- Error logs may contain: `asyncio.exceptions.InvalidStateError: invalid state`
Root Cause: When `max_tokens_in_buffer` in the cache transceiver config is smaller than the maximum input sequence length (ISL) being processed, KV cache exhaustion can occur under heavy load. This causes context transfers to time out, leaving workers stuck waiting for phantom transfers and entering an irrecoverable deadlock state.
Mitigation: Ensure `max_tokens_in_buffer` exceeds your maximum expected input sequence length. Update your engine configuration files (e.g., `prefill.yaml` and `decode.yaml`):
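A sketch of the relevant snippet; the exact value and nesting depend on your TRT-LLM version and workload:

```yaml
# prefill.yaml and decode.yaml
cache_transceiver_config:
  # Must exceed the maximum expected input sequence length (ISL)
  max_tokens_in_buffer: 8192
```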
For example, see `examples/backends/trtllm/engine_configs/gpt-oss-120b/prefill.yaml`.
Related Issue: #4327