Router Examples
For quick start instructions, see the Router README. This document provides further examples for using the Dynamo Router, including Python API usage, Kubernetes deployments, and custom routing patterns.
Table of Contents
- Using KvPushRouter Python API
- K8s Examples
- Routing Patterns
- Custom Routing Example: Minimizing TTFT
- KV Event Publishing for Custom Engines
Using KvPushRouter Python API
Instead of launching the KV Router via command line, you can create a KvPushRouter object directly in Python. This allows per-request routing configuration overrides.
> [!Warning]
> **Multiple Routers in Same Process**: If you need to run multiple `KvPushRouter` instances for fault tolerance or load distribution, you must launch them in separate processes (e.g., using `python -m dynamo.frontend` with different ports). Creating multiple `KvPushRouter` objects in the same Python process is not supported: they share the same cancellation token from the component's primary lease, so dropping one router will cancel all routers in that process. For in-process routing, use a single `KvPushRouter` instance.
Methods
The `KvPushRouter` provides the following methods:
- `generate(token_ids, model, ...)`: Route and execute a request, returning an async stream of responses. Automatically handles worker selection, state tracking, and lifecycle management.
- `best_worker(token_ids, router_config_override=None, request_id=None)`: Query which worker would be selected for the given tokens. Returns `(worker_id, dp_rank, overlap_blocks)`.
  - Without `request_id`: query-only; doesn't update router state.
  - With `request_id`: updates router state to track the request. Note: if used with `request_id`, you must call `mark_prefill_complete()` and `free()` at the appropriate lifecycle points to maintain accurate load tracking.
- `get_potential_loads(token_ids)`: Get detailed load information for all workers, including potential prefill tokens and active decode blocks. Returns a list of load dictionaries.
- `mark_prefill_complete(request_id)`: Signal that a request has completed its prefill phase. Only used for manual lifecycle management when routing manually with `best_worker()` instead of `generate()`.
- `free(request_id)`: Signal that a request has completed and its resources should be released. Only used for manual lifecycle management when routing manually with `best_worker()` instead of `generate()`.
- `dump_events()`: Dump all KV cache events from the router's indexer as a JSON string. Useful for debugging and analysis.
Setup
First, launch your backend engines:
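For example, with a vLLM backend (the module name, flags, and model below are placeholders; substitute your backend's actual launch command):

```bash
# Launch two backend workers that register with the distributed runtime.
# Run each on its own GPU/port as your setup requires.
python -m dynamo.vllm --model Qwen/Qwen3-0.6B &
python -m dynamo.vllm --model Qwen/Qwen3-0.6B &
```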
Example Script
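The sketch below shows end-to-end usage. The runtime and constructor calls (`DistributedRuntime.detached()`, the `endpoint=`, `block_size=`, and `kv_router_config=` arguments, and the component/endpoint names) are assumptions modeled on typical Dynamo Python bindings usage; check your installed version for the exact signatures.

```python
import asyncio

from dynamo.llm import KvPushRouter, KvRouterConfig
from dynamo.runtime import DistributedRuntime


async def main():
    # Attach to the distributed runtime and resolve the backend endpoint.
    # Namespace/component/endpoint names are placeholders for your deployment.
    runtime = DistributedRuntime.detached()
    endpoint = runtime.namespace("dynamo").component("backend").endpoint("generate")

    router = KvPushRouter(
        endpoint=endpoint,
        block_size=16,  # must match the engine's kv_block_size
        kv_router_config=KvRouterConfig(),
    )

    # Route and execute a request; per-request router_config_override
    # values can also be passed here.
    stream = await router.generate(
        token_ids=[1, 2, 3, 4],
        model="my-model",
    )
    async for response in stream:
        print(response)


asyncio.run(main())
```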
K8s Examples
For basic Kubernetes deployment with the KV Router, see the Kubernetes Deployment section in the Quick Start guide.
Complete K8s Examples
- TRT-LLM aggregated router example
- vLLM aggregated router example
- SGLang aggregated router example
- Distributed inference tutorial
For A/B Testing and Advanced K8s Setup: See the comprehensive KV Router A/B Benchmarking Guide for step-by-step instructions on deploying, configuring, and benchmarking the KV router in Kubernetes.
Example with Advanced Configuration
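A hypothetical `DynamoGraphDeployment` fragment configuring the router through environment variables is sketched below; the CRD shape and the `DYN_*` variable names are assumptions to verify against your Dynamo release:

```yaml
apiVersion: nvidia.com/v1alpha1
kind: DynamoGraphDeployment
metadata:
  name: kv-router-advanced
spec:
  services:
    Frontend:
      replicas: 1
      envs:
        - name: DYN_ROUTER_MODE
          value: "kv"
        # Hypothetical tuning knobs; confirm exact names for your release:
        - name: DYN_KV_OVERLAP_SCORE_WEIGHT
          value: "1.0"
        - name: DYN_ROUTER_TEMPERATURE
          value: "0.0"
```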
Alternative: Using Command Args in K8s
You can also pass CLI arguments directly in the container command:
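For instance (flag names are assumptions; check `python -m dynamo.frontend --help` for your version):

```yaml
spec:
  services:
    Frontend:
      extraPodSpec:
        mainContainer:
          command:
            - /bin/sh
            - -c
          args:
            - python -m dynamo.frontend --router-mode kv --http-port 8000
```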
Recommendation: Use environment variables for easier configuration management and consistency with Dynamo’s K8s patterns.
Routing Patterns
The KvPushRouter supports multiple usage patterns depending on your control requirements:
1. Automatic Routing (Recommended)
Call `generate()` directly and let the router handle everything:
- Best for: Most use cases
- Router automatically: Selects best worker, updates state, routes request, tracks lifecycle
2. Manual State Management (Advanced)
Use `best_worker(request_id=...)` to select and track, then manage the request yourself:
- Best for: Custom request handling with router state tracking
- Requires: Calling `mark_prefill_complete()` and `free()` at correct lifecycle points
- Caution: Incorrect lifecycle management degrades load balancing accuracy
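A sketch of this lifecycle, assuming an already-constructed `KvPushRouter` named `router` and `token_ids`/`request_id` supplied by your own serving loop (whether each call must be awaited depends on the bindings):

```python
# Select a worker and register the request with the router's state.
worker_id, dp_rank, overlap_blocks = await router.best_worker(
    token_ids,
    request_id=request_id,  # passing request_id makes the router track this request
)

# ... dispatch the request to `worker_id` over your own transport ...

# Keep the router's load tracking accurate by signaling lifecycle transitions:
await router.mark_prefill_complete(request_id)  # once the first token is produced
await router.free(request_id)                   # once the request fully completes
```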
3. Hierarchical Router Probing
Query without state updates, then route through a chosen router:
- Best for: Multi-tier deployments (e.g., Envoy Gateway routing to multiple router groups)
- Advantage: Query multiple routers before committing to one
4. Custom Load-Based Routing
Use `get_potential_loads()` to implement custom routing logic:
- Best for: Custom optimization strategies beyond the built-in cost function
- Advantage: Full control over worker selection logic
- See also: Detailed example below in “Custom Routing Example: Minimizing TTFT”
All patterns support `router_config_override` to adjust routing behavior per-request without recreating the router.
Custom Routing Example: Minimizing TTFT
Here's an example of using `get_potential_loads()` to implement custom routing that minimizes Time To First Token (TTFT) by selecting the worker with the least prefill work:
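The selection logic itself is a simple minimum over the per-worker loads. In the sketch below, the load-dict field names (`worker_id`, `potential_prefill_tokens`, `potential_decode_blocks`) are assumptions for illustration; check the actual keys returned by your version of `get_potential_loads()`.

```python
def select_min_ttft_worker(loads):
    """Pick the worker with the least pending prefill work.

    `loads` is a list of per-worker dicts shaped like the output of
    KvPushRouter.get_potential_loads() (field names assumed here).
    """
    best = min(loads, key=lambda w: w["potential_prefill_tokens"])
    return best["worker_id"]


# In a real deployment you would fetch the loads from the router:
#   loads = await router.get_potential_loads(token_ids)
loads = [
    {"worker_id": 0, "potential_prefill_tokens": 512, "potential_decode_blocks": 8},
    {"worker_id": 1, "potential_prefill_tokens": 64, "potential_decode_blocks": 12},
    {"worker_id": 2, "potential_prefill_tokens": 256, "potential_decode_blocks": 2},
]
print(select_min_ttft_worker(loads))  # worker 1 has the least prefill work
```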
This approach gives you complete control over routing decisions, allowing you to optimize for different metrics based on your specific requirements. For example:
- Minimize TTFT: Select the worker with the lowest `potential_prefill_tokens`
- Maximize cache reuse: Use `best_worker()`, which considers both prefill and decode loads
- Balance load: Consider `potential_prefill_tokens` and `potential_decode_blocks` together
See Router Design for architecture details and the cost function algorithm.
KV Event Publishing for Custom Engines
The KV Router relies on real-time events from backend workers to track which KV cache blocks are stored on each worker. When your custom engine allocates or evicts KV cache blocks, it should publish these events so the router can make optimal routing decisions. There are two main publishing pathways:
- Direct NATS publishing (`KvEventPublisher`): publishes events straight to NATS; the simplest approach for custom engines.
- ZMQ-based publishing: for engines with ZMQ event output (like vLLM); a ZMQ publisher in the engine emits events and `ZmqKvEventPublisher` forwards them to NATS.
Event Types
The KV cache supports three event types:
- `BlockStored`: blocks were written to the cache
- `BlockRemoved`: blocks were evicted from the cache
- `AllBlocksCleared`: the worker's entire cache was reset
Event Structure
Each event contains:
- `event_id`: Monotonically increasing identifier per worker
- `dp_rank`: Data parallel rank (0 if DP not enabled)
- `data`: One of `Stored`, `Removed`, or `Cleared`
For BlockStored events:
- `token_ids`: List of token IDs for the stored blocks
- `block_hashes`: List of sequence block hashes from the engine's block manager. These are cumulative hashes that incorporate all tokens from the start of the sequence up to and including the current block (not just the tokens within that block). This enables prefix matching across requests.
- `num_block_tokens`: Number of tokens per block (should all equal `kv_block_size`)
- `parent_hash`: Hash of the parent block. Required for all blocks except the first block in a sequence (which has no parent).
- `lora_id`: LoRA adapter ID (0 if not using LoRA)
For BlockRemoved events:
- `block_hashes`: List of sequence block hashes being evicted
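To make the structure concrete, here is a pure-Python sketch that assembles event payloads as plain dicts with a thread-safe, per-worker monotonic `event_id`. The dict layout mirrors the fields above but is an illustration, not the wire schema your publisher expects.

```python
import itertools
import threading


class KvEventBuilder:
    """Builds KV cache event payloads with per-worker monotonic event IDs."""

    def __init__(self, dp_rank=0):
        self._counter = itertools.count()
        self._lock = threading.Lock()  # keep event_id monotonic across threads
        self.dp_rank = dp_rank

    def _next_id(self):
        with self._lock:
            return next(self._counter)

    def stored(self, token_ids, block_hashes, num_block_tokens,
               parent_hash=None, lora_id=0):
        return {
            "event_id": self._next_id(),
            "dp_rank": self.dp_rank,
            "data": {
                "type": "BlockStored",
                "token_ids": token_ids,
                "block_hashes": block_hashes,       # cumulative sequence hashes
                "num_block_tokens": num_block_tokens,
                "parent_hash": parent_hash,         # None only for the first block
                "lora_id": lora_id,
            },
        }

    def removed(self, block_hashes):
        return {
            "event_id": self._next_id(),
            "dp_rank": self.dp_rank,
            "data": {"type": "BlockRemoved", "block_hashes": block_hashes},
        }


builder = KvEventBuilder()
e1 = builder.stored([1, 2, 3, 4], [0xA1], [4])
e2 = builder.removed([0xA1])
```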
Option 1: Direct NATS Publishing (Recommended)
The `KvEventPublisher` class publishes events directly to NATS. This is the simplest approach for custom engines.
When to use:
- Building a custom inference engine from scratch
- Your engine doesn’t have a ZMQ-based event system
- You want the simplest integration path
Basic Setup
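A minimal sketch, assuming `KvEventPublisher` exposes `publish_stored`/`publish_removed` with the arguments shown; the constructor and method signatures here are assumptions, so check the `dynamo.llm` bindings for your version.

```python
from dynamo.llm import KvEventPublisher
from dynamo.runtime import DistributedRuntime

runtime = DistributedRuntime.detached()
component = runtime.namespace("dynamo").component("backend")

worker_id = 0  # replace with this worker's lease/instance ID

publisher = KvEventPublisher(
    component=component,
    worker_id=worker_id,
    kv_block_size=16,  # must match the engine's block size
)

# When blocks are committed to the cache:
publisher.publish_stored(
    event_id=0,              # monotonically increasing per worker
    token_ids=[1, 2, 3, 4],
    num_block_tokens=[4],    # tokens per block; all should equal kv_block_size
    block_hashes=[12345],    # cumulative sequence hashes from the block manager
    lora_id=0,
    parent_hash=None,        # None only for the first block in a sequence
)

# When blocks are evicted:
publisher.publish_removed(event_id=1, block_hashes=[12345])
```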
Integration with Your Engine
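The key integration rule is to publish immediately after your block manager commits or evicts blocks. As a structural illustration in pure Python, with the publisher abstracted behind callbacks standing in for `publish_stored`/`publish_removed`, the hook points look like this:

```python
class CacheEventHooks:
    """Toy KV-cache wrapper showing where publish calls belong.

    `on_stored` / `on_removed` stand in for the real publisher calls;
    the cache itself is a plain dict purely for illustration.
    """

    def __init__(self, on_stored, on_removed):
        self._blocks = {}
        self._on_stored = on_stored
        self._on_removed = on_removed

    def store_block(self, block_hash, token_ids, parent_hash=None):
        self._blocks[block_hash] = token_ids
        # Publish right after the block is committed, so the router's
        # view of this worker's cache stays fresh.
        self._on_stored(block_hash, token_ids, parent_hash)

    def evict_block(self, block_hash):
        del self._blocks[block_hash]
        self._on_removed(block_hash)


stored, removed = [], []
cache = CacheEventHooks(
    on_stored=lambda h, toks, parent: stored.append(h),
    on_removed=lambda h: removed.append(h),
)
cache.store_block(0xA1, [1, 2, 3, 4])
cache.evict_block(0xA1)
```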
Option 2: ZMQ-based Publishing
For engines that publish events via ZMQ (like vLLM), this option uses two components that work together:
- ZMQ Publisher (in your engine) - Publishes events to a ZMQ socket
- ZmqKvEventPublisher (Dynamo binding) - Subscribes to ZMQ and forwards to NATS
When to use:
- Your engine already has a ZMQ-based event system (like vLLM)
- You’re integrating with a consolidator (like KVBM)
- You want to decouple event publishing from your engine’s main loop
Part 1: ZMQ Subscriber (Dynamo Bindings)
If your engine already publishes to ZMQ, use ZmqKvEventPublisher to subscribe and forward to NATS:
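A sketch, assuming the bindings expose a `ZmqKvEventPublisherConfig` with the fields shown (field names are assumptions; verify against your version):

```python
from dynamo.llm import ZmqKvEventPublisher, ZmqKvEventPublisherConfig
from dynamo.runtime import DistributedRuntime

runtime = DistributedRuntime.detached()
component = runtime.namespace("dynamo").component("backend")

worker_id = 0  # replace with this worker's lease/instance ID

config = ZmqKvEventPublisherConfig(
    worker_id=worker_id,
    kv_block_size=16,                     # must match the engine
    zmq_endpoint="tcp://127.0.0.1:5557",  # where the engine publishes events
)

# Subscribes to the ZMQ socket in the background and forwards each
# event batch to NATS for the router's indexer.
publisher = ZmqKvEventPublisher(component=component, config=config)
```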
Part 2: ZMQ Publisher (Pure Python)
If your engine needs to publish to ZMQ (e.g., for consolidator integration), implement the ZMQ protocol:
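The sketch below uses the third-party `pyzmq` and `msgpack` packages. The multipart framing shown (topic frame, 8-byte big-endian sequence number, msgpack-encoded payload) is modeled on vLLM-style event publishing and is an assumption; confirm the exact framing your subscriber expects.

```python
import msgpack  # pip install msgpack pyzmq
import zmq


class EngineZmqPublisher:
    """Publishes KV cache event batches over a ZMQ PUB socket."""

    def __init__(self, endpoint="tcp://127.0.0.1:5557", topic="kv-events"):
        self._ctx = zmq.Context()
        self._sock = self._ctx.socket(zmq.PUB)
        self._sock.bind(endpoint)
        self._topic = topic.encode()
        self._seq = 0  # monotonically increasing per publisher

    def publish(self, events):
        payload = msgpack.packb({"events": events})
        self._sock.send_multipart([
            self._topic,
            self._seq.to_bytes(8, "big"),
            payload,
        ])
        self._seq += 1


# Example usage with one BlockStored event (schema mirrors the fields
# described earlier; adjust to match your subscriber):
# publisher = EngineZmqPublisher()
# publisher.publish([{"type": "BlockStored", "token_ids": [1, 2, 3, 4],
#                     "block_hashes": [0xA1], "num_block_tokens": [4],
#                     "parent_hash": None, "lora_id": 0}])
```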
ZMQ Wire Format
The ZMQ message format (compatible with vLLM):
Each event in the payload is a dictionary with a `type` field (`BlockStored`, `BlockRemoved`, or `AllBlocksCleared`).
Best Practices
- Event IDs must be monotonically increasing per worker (use a thread-safe counter)
- Block size must match your engine's actual `kv_block_size`
- `parent_hash` is required for all blocks except the first in a sequence; it links blocks to enable prefix matching
See Also
- Router README: Quick start guide for the KV Router
- Router Guide: Configuration, tuning, and production setup
- Router Design: Architecture details and event transport modes