KV Router A/B Testing
This guide walks you through setting up and running A/B benchmarks to compare Dynamo’s KV Smart Router against standard round-robin routing on a Kubernetes cluster.
Overview
Dynamo’s KV Smart Router intelligently routes requests based on KV cache affinity, improving performance for workloads with shared prompt prefixes. This guide helps you:
- Deploy two identical Dynamo configurations:
  a. A vLLM server for Qwen3-32B with 8 workers (aggregated) WITHOUT KV Smart Router enabled
  b. A vLLM server for Qwen3-32B with 8 workers (aggregated) WITH KV Smart Router enabled
- Run controlled benchmarks using AIPerf
- Compare performance metrics to evaluate KV router effectiveness
Prerequisites: Kubernetes cluster with GPUs, kubectl, helm
Prerequisites
Required Tools
- `kubectl` (configured with cluster access)
- `helm` (v3+)
- HuggingFace account and token (if model downloads are gated)
- Kubernetes cluster with:
  - GPU nodes (H100, H200, or similar)
  - Sufficient GPU capacity (16+ GPUs recommended for this example)
- Dynamo platform installed globally OR ability to install per-namespace
Knowledge Requirements
- Basic Kubernetes concepts (namespaces, pods, services)
- Familiarity with LLM inference concepts
- Command-line proficiency
Architecture
This guide sets up two parallel deployments, as well as a benchmarking pod that can test each deployment:
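At a high level, the setup looks roughly like this:

```text
Deployment A (router-off-test)        Deployment B (router-on-test)
  Frontend - round-robin routing        Frontend - DYN_ROUTER_MODE=kv
  8x vLLM workers (Qwen3-32B)           8x vLLM workers (Qwen3-32B)
               ^                                     ^
               |                                     |
               +---------- benchmark pod (AIPerf) ---+
```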
Key Difference: Deployment B sets DYN_ROUTER_MODE=kv on the frontend to enable KV cache-aware routing.
Phase 1: Namespace and Infrastructure Setup
Step 1.1: Create Namespaces
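For example, using the namespace names referenced throughout this guide:

```bash
kubectl create namespace router-off-test
kubectl create namespace router-on-test
```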
Step 1.2: Create HuggingFace Token Secret (optional)
If the model you're deploying requires an HF token to download (Llama-family models do), replace YOUR_HF_TOKEN with your actual HuggingFace token:
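A minimal sketch; the secret name (`hf-token-secret`) and key (`HF_TOKEN`) are assumptions, so match whatever your deployment YAML references:

```bash
export HF_TOKEN=YOUR_HF_TOKEN   # replace with your actual token

# Create the secret in both namespaces so either deployment can pull the model.
kubectl create secret generic hf-token-secret \
  --from-literal=HF_TOKEN=${HF_TOKEN} -n router-off-test
kubectl create secret generic hf-token-secret \
  --from-literal=HF_TOKEN=${HF_TOKEN} -n router-on-test
```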
Step 1.3: Install Dynamo Platform (Per-Namespace)
If your cluster uses namespace-restricted Dynamo operators, you’ll need to install the Dynamo platform in each namespace. Follow the Dynamo Kubernetes Installation Guide to install the platform in both namespaces:
- router-off-test
- router-on-test
Key Configuration Notes:
- If your cluster uses namespace restrictions, ensure `dynamo-operator.namespaceRestriction.enabled=true` is set during installation (see the sketch after these notes)
- Adjust version tags to match your cluster's available Dynamo versions
- If you encounter operator compatibility issues (e.g., unsupported MPI arguments), consult your cluster administrator or the Dynamo troubleshooting documentation
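As a sketch of the per-namespace install (the chart reference and release name are placeholders; use the exact commands from the installation guide for your Dynamo version):

```bash
# Repeat with --namespace router-on-test for the second namespace.
helm install dynamo-platform <dynamo-platform-chart> \
  --namespace router-off-test \
  --set dynamo-operator.namespaceRestriction.enabled=true
```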
Step 1.4: Verify Infrastructure
Wait for operators and infrastructure to be ready:
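For example:

```bash
kubectl get pods -n router-off-test
kubectl get pods -n router-on-test
```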
You should see:
- `dynamo-platform-dynamo-operator-controller-manager` (2/2 Running)
- `dynamo-platform-etcd-0` (1/1 Running)
- `dynamo-platform-nats-0` (2/2 Running)
Phase 2: Deploy Model Serving
Step 2.1: Create Deployment YAMLs
Create router-off-deployment.yaml:
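A minimal sketch of the shape such a manifest takes. Treat every field below as illustrative: the apiVersion, service names, image, and args must match the DynamoGraphDeployment examples that ship with your Dynamo version.

```yaml
# Illustrative only -- align fields with the DynamoGraphDeployment
# examples shipped with your Dynamo version.
apiVersion: nvidia.com/v1alpha1
kind: DynamoGraphDeployment
metadata:
  name: router-off
  namespace: router-off-test
spec:
  services:
    Frontend:
      replicas: 1
      # No DYN_ROUTER_MODE set: the frontend uses its default (round-robin) routing.
    VllmDecodeWorker:
      replicas: 8                  # 8 aggregated workers
      resources:
        limits:
          gpu: "1"                 # one GPU per worker
      extraPodSpec:
        mainContainer:
          image: <dynamo-vllm-runtime-image>   # placeholder
          command: ["/bin/sh", "-c"]
          args:
            - "python3 -m dynamo.vllm --model Qwen/Qwen3-32B"
```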
Create router-on-deployment.yaml:
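The router-on manifest is identical except for the metadata and the frontend environment variable the guide calls out; again, a sketch (the env field name may differ by CRD version, but the key point is `DYN_ROUTER_MODE=kv` on the frontend):

```yaml
# Same as router-off-deployment.yaml, but deployed to router-on-test
# and with KV cache-aware routing enabled on the frontend.
metadata:
  name: router-on
  namespace: router-on-test
spec:
  services:
    Frontend:
      replicas: 1
      envs:
        - name: DYN_ROUTER_MODE
          value: "kv"
```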
Step 2.2: Deploy Both Configurations
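For example:

```bash
kubectl apply -f router-off-deployment.yaml -n router-off-test
kubectl apply -f router-on-deployment.yaml -n router-on-test
```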
💡 Optimization Tip: Each worker will download the model independently (~20 minutes per pod). For faster initialization, add a shared PVC with ReadWriteMany access mode to cache the model.
First, create the PVC separately:
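For example (the storage class is a placeholder and must support ReadWriteMany, such as an NFS-backed class; the size is illustrative). Create it in both namespaces:

```yaml
# model-cache-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-cache
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <your-rwx-storage-class>
  resources:
    requests:
      storage: 200Gi
```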
Then reference it in your DynamoGraphDeployment:
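One possible shape, assuming the worker pod spec is exposed through `extraPodSpec` as in the sketch above; the field placement is an assumption, so check your CRD schema:

```yaml
# Illustrative: mount the shared PVC at the HuggingFace cache path.
    VllmDecodeWorker:
      extraPodSpec:
        volumes:
          - name: model-cache
            persistentVolumeClaim:
              claimName: model-cache
        mainContainer:
          volumeMounts:
            - name: model-cache
              mountPath: /root/.cache/huggingface
```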
With this configuration, only the first worker downloads the model; others use the cached version, reducing startup time from 20+ minutes to ~2 minutes per pod.
Step 2.3: Monitor Deployment Progress
Wait for all pods to reach Running status and pass readiness probes.
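For example:

```bash
kubectl get pods -n router-off-test -w
kubectl get pods -n router-on-test -w
```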
Expected Timeline:
- With shared PVC (ReadWriteMany): ~5-10 minutes total (first worker downloads, others reuse cache)
- Without shared PVC: 20-30 minutes per worker (workers download independently)
- For 8 workers: Budget 1-2 hours for full deployment (workers start in parallel but are limited by node scheduling)
The startup probe allows 32 minutes per pod (failureThreshold: 60), which accommodates model download and initialization.
Step 2.4: Verify All Workers Are Healthy
⚠️ CRITICAL CHECKPOINT: Before running benchmarks, you MUST verify equal worker health in both deployments. Unequal worker counts will invalidate your comparison results.
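A quick check is to list the pods in each namespace and count the worker pods reporting 1/1 Running:

```bash
kubectl get pods -n router-off-test
kubectl get pods -n router-on-test
```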
Both must show 8/8 workers in Ready state (1/1 Running). If workers are not ready:
- Check logs: `kubectl logs -n <namespace> <pod-name>`
- Common issues: model download in progress, startup probe timeout, insufficient GPU resources
Do not proceed with benchmarks until all 16 workers (8 per deployment) are healthy.
Phase 3: Prepare Benchmark Dataset
Understanding the Mooncake Trace Dataset
For this A/B comparison, we use the Mooncake Trace Dataset, a privacy-preserving dataset of real-world LLM inference traffic from production arXiv workloads published by the Mooncake project.
What’s in the dataset? Each trace entry contains:
- Timestamp: When the request arrived (for realistic request timing)
- Input/output lengths: Number of tokens in prompts and responses
- Block hash IDs: Cryptographic hashes representing KV cache blocks (explained below)
Sample trace entry:
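The field names below follow the published trace format; the values are illustrative:

```json
{"timestamp": 27482, "input_length": 8192, "output_length": 52, "hash_ids": [46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61]}
```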
Why Mooncake Traces Matter for KV Cache Benchmarking
The Challenge: Traditional LLM benchmarks use synthetic or random data, which rarely exercises real-world optimizations like the KV Smart Router. To properly evaluate this feature, we need realistic traffic patterns with prefix repetition - but this creates a privacy problem: how do we measure realistic KV cache hit patterns without exposing actual user conversations?
Mooncake’s Solution: Privacy-Preserving Block Hashes
Instead of storing actual prompt text, the Mooncake dataset uses cryptographic hashes to represent KV cache blocks. Each hash ID represents a 512-token block, and the hash includes both the current block and all preceding blocks. This preserves the pattern of prefix reuse while completely protecting user privacy.
How it works - Multi-turn conversation example
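For instance, two consecutive turns of the same conversation might appear as follows (illustrative values); the second request repeats the first turn's blocks 46-61 and adds one new block:

```json
{"timestamp": 10000, "input_length": 8192, "output_length": 120, "hash_ids": [46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61]}
{"timestamp": 12000, "input_length": 8704, "output_length": 95, "hash_ids": [46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62]}
```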
When requests share the same hash IDs (e.g., blocks 46-61), it means they share those 512-token blocks - indicating significant prefix overlap (in this case, 8,192 tokens). The KV Smart Router routes requests with matching hash IDs to the same worker, maximizing cache hits and avoiding redundant computation for those shared prefix tokens.
Key Dataset Properties:
- ✅ Realistic timing: Request arrival patterns from production workloads
- ✅ Real prefix patterns: Up to 50% cache hit ratio (Mooncake technical report)
- ✅ Privacy-preserving: No actual text - only hash-based cache block identifiers
- ✅ Reproducible: Public dataset enables fair comparisons across different systems
Why this matters: With random synthetic data, the KV Smart Router would show no benefit because there’s no prefix reuse to exploit. Mooncake traces provide realistic workload patterns that demonstrate the router’s real-world performance gains while respecting user privacy.
Download and Prepare the Dataset
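A sketch of what a `prepare-dataset.sh` script can look like; the trace URL is deliberately left as a placeholder, so point it at the `mooncake_trace.jsonl` file published in the Mooncake repository (or your own copy):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder: substitute the real URL of the published Mooncake trace file.
TRACE_URL="<URL-of-mooncake_trace.jsonl>"

curl -L -o mooncake_trace.jsonl "${TRACE_URL}"

# Optional: trim to a fixed number of requests for shorter benchmark runs.
head -n 1000 mooncake_trace.jsonl > mooncake_trace_1k.jsonl
wc -l mooncake_trace*.jsonl
```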
Phase 4: Set Up Benchmark Environment
Step 4.1: Deploy Benchmark Pod
Create benchmark-job.yaml:
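A minimal sketch of a long-running pod you can exec into; the image, namespace, and resource requests are assumptions (any image with Python 3.10+ works):

```yaml
# benchmark-job.yaml -- a simple pod for running AIPerf inside the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: aiperf-benchmark
  namespace: router-off-test   # any namespace that can reach both frontends
spec:
  containers:
    - name: benchmark
      image: python:3.12-slim
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "4"
          memory: 8Gi
```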
Deploy:
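```bash
kubectl apply -f benchmark-job.yaml
```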
Wait for pod to be ready:
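For example (matching the pod name used in the sketch above):

```bash
kubectl wait --for=condition=Ready pod/aiperf-benchmark \
  -n router-off-test --timeout=300s
```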
Step 4.2: Copy Dataset to Benchmark Pod
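For example, copying the prepared trace into the pod's /tmp directory:

```bash
kubectl cp mooncake_trace.jsonl \
  router-off-test/aiperf-benchmark:/tmp/mooncake_trace.jsonl
```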
Step 4.3: Install AIPerf
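A sketch, assuming AIPerf is installable from PyPI in your environment (otherwise install it from the AIPerf repository):

```bash
kubectl exec -n router-off-test aiperf-benchmark -- pip install aiperf
```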
Phase 5: Run Benchmarks
Step 5.1: Benchmark Router-OFF (Baseline)
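A sketch of the AIPerf invocation. The `--model`, `--tokenizer`, and `--custom-dataset-type` flags are the ones referenced elsewhere in this guide; the remaining flags, the dataset-type value, and the frontend service name and port (8000 here) are assumptions, so check `aiperf profile --help` and your frontend Service before running:

```bash
kubectl exec -n router-off-test aiperf-benchmark -- \
  aiperf profile \
    --model Qwen/Qwen3-32B \
    --tokenizer Qwen/Qwen3-32B \
    --url http://<router-off-frontend-service>.router-off-test.svc.cluster.local:8000 \
    --endpoint-type chat \
    --input-file /tmp/mooncake_trace.jsonl \
    --custom-dataset-type mooncake_trace
```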
Note: This will take 3-5 minutes. The terminal output includes a summary table.
Step 5.2: Benchmark Router-ON (KV Smart Router)
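Run the same command against the router-on frontend so that only the routing mode differs:

```bash
kubectl exec -n router-off-test aiperf-benchmark -- \
  aiperf profile \
    --model Qwen/Qwen3-32B \
    --tokenizer Qwen/Qwen3-32B \
    --url http://<router-on-frontend-service>.router-on-test.svc.cluster.local:8000 \
    --endpoint-type chat \
    --input-file /tmp/mooncake_trace.jsonl \
    --custom-dataset-type mooncake_trace
```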
Step 5.3: Collect Results
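AIPerf writes its artifacts (including result CSVs) inside the pod; copy them out for analysis. The artifacts directory is left as a placeholder, so adjust it to wherever your AIPerf runs write output:

```bash
kubectl cp router-off-test/aiperf-benchmark:<aiperf-artifacts-dir> ./aiperf-results
```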
Phase 6: Analyze Results
Key Metrics to Compare
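At a minimum, compare the headline metrics from AIPerf's summary table for the two runs:
- Time to First Token (TTFT)
- Request throughput (requests/second)
- End-to-end request latency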
Interpreting Results
Your Results May Vary: The improvement from KV Smart Router depends heavily on your workload characteristics:
Factors that increase KV router benefit:
- High prefix overlap (shared system prompts, templates, document contexts)
- Long prompts (>2000 tokens) where caching saves significant compute
- Multi-turn conversations with context carryover
- Batch workloads with similar queries
Factors that reduce KV router benefit:
- Unique prompts with no prefix reuse
- Short prompts (<1000 tokens) where routing overhead exceeds benefit
- Evenly distributed load where round-robin is already optimal
- Low request rate where cache eviction negates benefits
Expected Performance:
- High prefix overlap workloads: 20-50% TTFT improvement
- Moderate prefix overlap: 10-20% improvement
- Low prefix overlap: <5% improvement (may not be worth enabling)
KV Smart Router is beneficial when:
- TTFT improvements > 20%
- No significant degradation in other metrics
- Workload demonstrates measurable prefix reuse patterns
Standard routing is better when:
- KV router shows <10% improvement
- Increased latency variance is observed
- Load distribution across workers is more important than cache affinity
Example Comparison
From the terminal output, compare the summary tables:
In this example with all 8 workers healthy, the KV router significantly outperformed the baseline:
- 37% faster TTFT - Users see first token much sooner
- 8% higher throughput - System processes more requests per second
- 12% lower latency - Faster end-to-end completion
The Mooncake arxiv dataset has sufficient prefix overlap (long input sequences with similar patterns) to benefit from KV cache-aware routing. Workloads with explicit shared prefixes (system prompts, templates) may see even greater improvements.
Phase 7: Cleanup
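Deleting the two namespaces removes the deployments, benchmark pod, secrets, and PVCs created in this guide:

```bash
kubectl delete namespace router-off-test router-on-test
```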
Troubleshooting
Issue: Pods Stuck in Pending
Cause: Insufficient GPU resources
Solution:
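For example, check what the scheduler is waiting on and how many GPUs each node exposes:

```bash
# Why is the pod unschedulable?
kubectl describe pod <pending-pod> -n <namespace>

# How many GPUs does each node advertise?
kubectl describe nodes | grep -A8 "Allocatable"
```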
Issue: ImagePullBackOff Errors
Cause: Version mismatch or missing credentials
Solution:
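For example, confirm the exact image reference and pull error, then verify the tag exists and any required imagePullSecrets are present:

```bash
kubectl describe pod <pod-name> -n <namespace> | grep -iA3 image
kubectl get secrets -n <namespace>
```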
Issue: Operator Not Processing Deployment
Cause: Namespace restrictions
Solution:
- Ensure Dynamo platform is Helm-installed in the namespace
- Verify the operator has the `--restrictedNamespace=<your-namespace>` argument
- Check operator logs: `kubectl logs -n <namespace> deployment/dynamo-platform-dynamo-operator-controller-manager`
Issue: Workers Not Becoming Ready
Cause: Model download failures or probe configuration
Solution:
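Inspect the worker logs and probe events, for example:

```bash
kubectl logs -n <namespace> <worker-pod> --tail=100
kubectl describe pod <worker-pod> -n <namespace>
```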
Issue: Workers Restarting in CrashLoopBackOff
Cause: Startup probe timeout - workers killed before finishing initialization
Symptoms:
- Pods show “Container main failed startup probe, will be restarted”
- Logs show model still downloading or loading when pod is killed
- Large models (>30GB) take longer than the default 22-minute timeout
Solution:
Increase the startup probe `failureThreshold`, either by editing the deployed resource with `kubectl edit` or by updating your YAML before deploying.
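A sketch, assuming the probe is configured on the worker's main container as in the deployment sketch earlier; the probe handler and field placement are assumptions, so match your CRD schema:

```yaml
    VllmDecodeWorker:
      extraPodSpec:
        mainContainer:
          startupProbe:
            httpGet:               # placeholder handler -- keep whatever your deployment already uses
              path: /health
              port: 9090
            periodSeconds: 30
            failureThreshold: 90   # ~45 minutes at a 30s period
```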
Model Loading Times (approximate):
- Qwen3-32B: ~20-25 minutes (first download)
- Llama-70B: ~25-30 minutes (first download)
- With cached model on node: ~2-5 minutes
Issue: Unequal Worker Health
Cause: Resource constraints, image pull issues, or configuration errors
Solution:
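For example, compare how many pods report 1/1 Running in each namespace and investigate any that do not:

```bash
kubectl get pods -n router-off-test
kubectl get pods -n router-on-test
kubectl describe pod <unhealthy-pod> -n <namespace>
```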
Advanced Configuration
Testing Different Models
Replace Qwen/Qwen3-32B with your model in:
- Deployment YAML `args` section
- AIPerf `--model` and `--tokenizer` parameters
Adjusting Worker Count
Change replicas: 8 in the deployment YAMLs. Ensure both deployments use the same count for fair comparison.
Using Custom Datasets
Replace mooncake dataset with your own JSONL file:
- Format: one request per line with a `timestamp` field
- AIPerf supports various formats via `--custom-dataset-type`
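For example, a minimal custom trace line in the same spirit as the Mooncake format (fields beyond `timestamp` depend on the dataset type you select):

```json
{"timestamp": 0, "input_length": 4096, "output_length": 128, "hash_ids": [0, 1, 2, 3, 4, 5, 6, 7]}
```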
Disaggregated Prefill/Decode
For advanced testing, add separate prefill workers:
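As a sketch only; the prefill component name and worker arguments are placeholders that depend on your Dynamo version, so start from the disaggregated examples in the Dynamo repository:

```yaml
# Illustrative: add a prefill worker pool alongside the decode workers.
spec:
  services:
    VllmPrefillWorker:
      replicas: 4
      resources:
        limits:
          gpu: "1"
      extraPodSpec:
        mainContainer:
          image: <dynamo-vllm-runtime-image>   # placeholder
          command: ["/bin/sh", "-c"]
          args:
            - "python3 -m dynamo.vllm --model Qwen/Qwen3-32B --is-prefill-worker"
```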
Best Practices
- Equal Conditions: Ensure both deployments have identical worker counts and health before benchmarking
- Warm-Up: Run a small test (100 requests) before the full benchmark to warm up caches
- Multiple Runs: Run benchmarks 3+ times and average results for statistical significance
- Monitor Workers: Watch for any pod restarts or issues during benchmark runs
- Document Conditions: Record cluster state, worker health, and any anomalies
- Test Relevant Workloads: Use datasets that match your actual use case for meaningful results
Conclusion
This guide provides a complete methodology for A/B testing Dynamo’s KV Smart Router. The KV router’s effectiveness depends heavily on workload characteristics—datasets with high prefix overlap will show the most benefit.
For questions or issues, consult the Dynamo documentation or open an issue on GitHub.
Appendix: Files Reference
- `router-off-deployment.yaml`: Standard routing deployment
- `router-on-deployment.yaml`: KV router enabled deployment
- `benchmark-job.yaml`: AIPerf benchmark pod
- `prepare-dataset.sh`: Dataset preparation script
- Results CSVs: Detailed metrics from AIPerf
Repository: https://github.com/ai-dynamo/dynamo