KV Router A/B Testing

This guide walks you through setting up and running A/B benchmarks to compare Dynamo’s KV Smart Router against standard round-robin routing on a Kubernetes cluster.

Overview

Dynamo’s KV Smart Router intelligently routes requests based on KV cache affinity, improving performance for workloads with shared prompt prefixes. This guide helps you:

  1. Deploy two identical Dynamo configurations:
     a. A vLLM server for Qwen3-32B with 8 workers (aggregated) WITHOUT the KV Smart Router enabled
     b. A vLLM server for Qwen3-32B with 8 workers (aggregated) WITH the KV Smart Router enabled
  2. Run controlled benchmarks using AIPerf
  3. Compare performance metrics to evaluate KV router effectiveness

Prerequisites: Kubernetes cluster with GPUs, kubectl, helm


Prerequisites

Required Tools

  • kubectl (configured with cluster access)
  • helm (v3+)
  • HuggingFace account and token (if model downloads are gated)
  • Kubernetes cluster with:
    • GPU nodes (H100, H200, or similar)
    • Sufficient GPU capacity (16+ GPUs recommended for this example)
    • Dynamo platform installed globally OR ability to install per-namespace

Knowledge Requirements

  • Basic Kubernetes concepts (namespaces, pods, services)
  • Familiarity with LLM inference concepts
  • Command-line proficiency

Architecture

This guide sets up two parallel deployments, as well as a benchmarking pod that can test each deployment:

┌─────────────────────────────────────┐
│ Deployment A: Router OFF │
│ Namespace: router-off-test │
│ ├─ Frontend (Standard Routing) │
│ └─ 8x Decode Workers (1 GPU each) │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│ Deployment B: Router ON │
│ Namespace: router-on-test │
│ ├─ Frontend (KV Smart Router) │
│ └─ 8x Decode Workers (1 GPU each) │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│ Benchmark Pod │
│ Namespace: benchmark │
│ └─ AIPerf + Dataset │
└─────────────────────────────────────┘

Key Difference: Deployment B sets DYN_ROUTER_MODE=kv on the frontend to enable KV cache-aware routing.


Phase 1: Namespace and Infrastructure Setup

Step 1.1: Create Namespaces

$# Create namespaces for both deployments
$kubectl create namespace router-off-test
$kubectl create namespace router-on-test
$kubectl create namespace benchmark

Step 1.2: Create HuggingFace Token Secret (optional)

If the model you’re deploying requires a HuggingFace token to download (Llama-family models do), replace YOUR_HF_TOKEN with your actual HuggingFace token:

$# Router-OFF namespace
$kubectl create secret generic hf-token-secret \
> --from-literal=HF_TOKEN="YOUR_HF_TOKEN" \
> -n router-off-test
$
$# Router-ON namespace
$kubectl create secret generic hf-token-secret \
> --from-literal=HF_TOKEN="YOUR_HF_TOKEN" \
> -n router-on-test

Step 1.3: Install Dynamo Platform (Per-Namespace)

If your cluster uses namespace-restricted Dynamo operators, you’ll need to install the Dynamo platform in each namespace. Follow the Dynamo Kubernetes Installation Guide to install the platform in both namespaces:

  • router-off-test
  • router-on-test

Key Configuration Notes:

  • If your cluster uses namespace restrictions, ensure dynamo-operator.namespaceRestriction.enabled=true is set during installation
  • Adjust version tags to match your cluster’s available Dynamo versions
  • If you encounter operator compatibility issues (e.g., unsupported MPI arguments), consult your cluster administrator or the Dynamo troubleshooting documentation

Step 1.4: Verify Infrastructure

Wait for operators and infrastructure to be ready:

$# Check router-off-test
$kubectl get pods -n router-off-test
$
$# Check router-on-test
$kubectl get pods -n router-on-test

You should see:

  • dynamo-platform-dynamo-operator-controller-manager (2/2 Running)
  • dynamo-platform-etcd-0 (1/1 Running)
  • dynamo-platform-nats-0 (2/2 Running)
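
You can also block until everything in both namespaces reports Ready instead of re-running the checks by hand; a minimal sketch (the 5-minute timeout is an arbitrary choice, lengthen it if your cluster schedules slowly):

$# Wait for all infrastructure pods to become Ready
$kubectl wait --for=condition=Ready pod --all -n router-off-test --timeout=300s
$kubectl wait --for=condition=Ready pod --all -n router-on-test --timeout=300s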

Phase 2: Deploy Model Serving

Step 2.1: Create Deployment YAMLs

Create router-off-deployment.yaml:

apiVersion: nvidia.com/v1alpha1
kind: DynamoGraphDeployment
metadata:
  name: vllm-agg-no-router
spec:
  services:
    Frontend:
      dynamoNamespace: vllm-agg-no-router
      componentType: frontend
      replicas: 1
      extraPodSpec:
        mainContainer:
          image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
    VllmDecodeWorker:
      envFromSecret: hf-token-secret
      dynamoNamespace: vllm-agg-no-router
      componentType: worker
      replicas: 8
      resources:
        limits:
          gpu: "1"
      extraPodSpec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node.kubernetes.io/instance-type
                      operator: In
                      values:
                        - gpu-h200-sxm # Adjust to your GPU node type
        mainContainer:
          image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
          workingDir: /workspace/examples/backends/vllm
          command:
            - /bin/sh
            - -c
          args:
            - python3 -m dynamo.vllm --model Qwen/Qwen3-32B --quantization fp8
          startupProbe:
            httpGet:
              path: /health
              port: 9090
            initialDelaySeconds: 120
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 60 # 32 minutes total (120s + 60*30s)
          livenessProbe:
            httpGet:
              path: /live
              port: 9090
            initialDelaySeconds: 300
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: /live
              port: 9090
            initialDelaySeconds: 300
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 10

Create router-on-deployment.yaml:

apiVersion: nvidia.com/v1alpha1
kind: DynamoGraphDeployment
metadata:
  name: vllm-agg-router
spec:
  services:
    Frontend:
      dynamoNamespace: vllm-agg-router
      componentType: frontend
      replicas: 1
      extraPodSpec:
        mainContainer:
          image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
      envs:
        - name: DYN_ROUTER_MODE
          value: kv # KEY DIFFERENCE: Enable KV Smart Router
    VllmDecodeWorker:
      envFromSecret: hf-token-secret
      dynamoNamespace: vllm-agg-router
      componentType: worker
      replicas: 8
      resources:
        limits:
          gpu: "1"
      extraPodSpec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node.kubernetes.io/instance-type
                      operator: In
                      values:
                        - gpu-h200-sxm # Adjust to your GPU node type
        mainContainer:
          image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
          workingDir: /workspace/examples/backends/vllm
          command:
            - /bin/sh
            - -c
          args:
            - python3 -m dynamo.vllm --model Qwen/Qwen3-32B --quantization fp8
          startupProbe:
            httpGet:
              path: /health
              port: 9090
            initialDelaySeconds: 120
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 60 # 32 minutes total (120s + 60*30s)
          livenessProbe:
            httpGet:
              path: /live
              port: 9090
            initialDelaySeconds: 300
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: /live
              port: 9090
            initialDelaySeconds: 300
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 10

Step 2.2: Deploy Both Configurations

$# Deploy router-OFF
$kubectl apply -f router-off-deployment.yaml -n router-off-test
$
$# Deploy router-ON
$kubectl apply -f router-on-deployment.yaml -n router-on-test

💡 Optimization Tip: Each worker will download the model independently (~20 minutes per pod). For faster initialization, add a shared PVC with ReadWriteMany access mode to cache the model.

First, create the PVC separately:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-cache
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "your-shared-storage-class" # e.g., nfs, efs, nebius-shared-fs
  resources:
    requests:
      storage: 100Gi

Then reference it in your DynamoGraphDeployment:

spec:
  pvcs:
    - create: false
      name: model-cache
      size: "0"
  services:
    VllmDecodeWorker:
      volumeMounts:
        - mountPoint: /root/.cache/huggingface
          name: model-cache
          useAsCompilationCache: false

With this configuration, only the first worker downloads the model; others use the cached version, reducing startup time from 20+ minutes to ~2 minutes per pod.
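
Before relying on the cache, it is worth confirming that the PVC bound and, once the first worker finishes downloading, that the cache is actually populated; a quick sketch (the path mirrors the mountPoint above):

$# Confirm the shared PVC is Bound in each namespace that references it
$kubectl get pvc model-cache -n router-off-test
$
$# After the first worker finishes downloading, the cache size should roughly match the model
$WORKER=$(kubectl get pods -n router-off-test -l nvidia.com/dynamo-component-type=worker -o jsonpath='{.items[0].metadata.name}')
$kubectl -n router-off-test exec ${WORKER} -- du -sh /root/.cache/huggingface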

Step 2.3: Monitor Deployment Progress

$# Watch router-OFF pods
$kubectl get pods -n router-off-test -w
$
$# Watch router-ON pods
$kubectl get pods -n router-on-test -w

Wait for all pods to reach Running status and pass readiness probes.

Expected Timeline:

  • With shared PVC (ReadWriteMany): ~5-10 minutes total (first worker downloads, others reuse cache)
  • Without shared PVC: 20-30 minutes per worker (workers download independently)
    • For 8 workers: Budget 1-2 hours for full deployment (workers start in parallel but are limited by node scheduling)

The startup probe allows 32 minutes per pod (failureThreshold: 60), which accommodates model download and initialization.
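
If you prefer to block rather than watch, you can wait on the worker label used in Step 2.4; the 45-minute timeout below is an arbitrary choice sized to the startup-probe budget:

$# Wait (up to 45 minutes) for all workers to pass their readiness probes
$kubectl wait --for=condition=Ready pod -l nvidia.com/dynamo-component-type=worker \
>   -n router-off-test --timeout=45m
$kubectl wait --for=condition=Ready pod -l nvidia.com/dynamo-component-type=worker \
>   -n router-on-test --timeout=45m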

Step 2.4: Verify All Workers Are Healthy

⚠️ CRITICAL CHECKPOINT: Before running benchmarks, you MUST verify equal worker health in both deployments. Unequal worker counts will invalidate your comparison results.

$# Quick health check - both should show "8/8"
$echo "Router OFF: $(kubectl get pods -n router-off-test -l nvidia.com/dynamo-component-type=worker --field-selector=status.phase=Running -o json | jq '[.items[] | select(.status.conditions[] | select(.type=="Ready" and .status=="True"))] | length')/8 ready"
$echo "Router ON: $(kubectl get pods -n router-on-test -l nvidia.com/dynamo-component-type=worker --field-selector=status.phase=Running -o json | jq '[.items[] | select(.status.conditions[] | select(.type=="Ready" and .status=="True"))] | length')/8 ready"
$
$# Detailed view
$kubectl get pods -n router-off-test -l nvidia.com/dynamo-component-type=worker
$kubectl get pods -n router-on-test -l nvidia.com/dynamo-component-type=worker

Both must show 8/8 workers in Ready state (1/1 Running). If workers are not ready:

  • Check logs: kubectl logs -n <namespace> <pod-name>
  • Common issues: model download in progress, startup probe timeout, insufficient GPU resources

Do not proceed with benchmarks until all 16 workers (8 per deployment) are healthy.
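
It is also worth spot-checking that only Deployment B actually has the router enabled. The sketch below assumes the frontend pods carry a nvidia.com/dynamo-component-type=frontend label (mirroring the worker label above); adjust the selector if your cluster labels them differently:

$# Frontend label selector is an assumption -- adjust to your cluster's labels if needed
$FRONTEND_ON=$(kubectl get pods -n router-on-test -l nvidia.com/dynamo-component-type=frontend -o jsonpath='{.items[0].metadata.name}')
$kubectl -n router-on-test exec ${FRONTEND_ON} -- env | grep DYN_ROUTER_MODE   # expect DYN_ROUTER_MODE=kv
$
$FRONTEND_OFF=$(kubectl get pods -n router-off-test -l nvidia.com/dynamo-component-type=frontend -o jsonpath='{.items[0].metadata.name}')
$kubectl -n router-off-test exec ${FRONTEND_OFF} -- env | grep DYN_ROUTER_MODE || echo "DYN_ROUTER_MODE not set (expected for the baseline)"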


Phase 3: Prepare Benchmark Dataset

Understanding the Mooncake Trace Dataset

For this A/B comparison, we use the Mooncake Trace Dataset, published by the Mooncake team. This is a privacy-preserving dataset of real-world LLM inference traffic from production arXiv workloads.

What’s in the dataset? Each trace entry contains:

  • Timestamp: When the request arrived (for realistic request timing)
  • Input/output lengths: Number of tokens in prompts and responses
  • Block hash IDs: Cryptographic hashes representing KV cache blocks (explained below)

Sample trace entry:

{
  "timestamp": 27482,
  "input_length": 6955,
  "output_length": 52,
  "hash_ids": [46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 2353, 2354]
}

Why Mooncake Traces Matter for KV Cache Benchmarking

The Challenge: Traditional LLM benchmarks use synthetic or random data, which rarely exercise real-world optimizations like the KV Smart Router. To properly evaluate this feature, we need realistic traffic patterns with prefix repetition, but this creates a privacy problem: how do we measure realistic KV cache hit patterns without exposing actual user conversations?

Mooncake’s Solution: Privacy-Preserving Block Hashes

Instead of storing actual prompt text, the Mooncake dataset uses cryptographic hashes to represent KV cache blocks. Each hash ID represents a 512-token block, and the hash includes both the current block and all preceding blocks. This preserves the pattern of prefix reuse while completely protecting user privacy.

How it works - Multi-turn conversation example

Turn 1 (initial request - long document analysis):
Input: ~8,000 tokens (e.g., research paper + question)
Hash IDs: [46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61]
└─ 16 blocks × 512 tokens/block = ~8,192 tokens
Turn 2 (follow-up question on same document):
Input: Same document + new question (~8,500 tokens)
Hash IDs: [46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62]
└──────────── Reuses first 16 blocks (~8,192 tokens) ───────────────┘
✅ Cache hit: First 8,192 tokens don't need recomputation!
Turn 3 (another follow-up):
Input: Same document + different question (~9,000 tokens)
Hash IDs: [46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63]
└──────────── Reuses first 16 blocks (~8,192 tokens) ───────────────┘

When requests share the same hash IDs (e.g., blocks 46-61), it means they share those 512-token blocks - indicating significant prefix overlap (in this case, 8,192 tokens). The KV Smart Router routes requests with matching hash IDs to the same worker, maximizing cache hits and avoiding redundant computation for those shared prefix tokens.
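
To make the matching concrete, here is a toy sketch of the shared-prefix computation using the hash IDs from the example above. It is an illustration only, not the KV Smart Router's actual implementation:

$python3 - <<'PY'
# Toy illustration only -- not the KV Smart Router's real scoring logic.
def shared_prefix_blocks(a, b):
    """Count how many leading hash IDs two requests have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

turn_1 = list(range(46, 62))   # 16 blocks ~= 8,192 tokens (research paper + question)
turn_2 = list(range(46, 63))   # same document + a new question
print(shared_prefix_blocks(turn_1, turn_2) * 512, "prefix tokens reusable from cache")
PY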

Key Dataset Properties:

  • Realistic timing: Request arrival patterns from production workloads
  • Real prefix patterns: Up to 50% cache hit ratio (Mooncake technical report)
  • Privacy-preserving: No actual text - only hash-based cache block identifiers
  • Reproducible: Public dataset enables fair comparisons across different systems

Why this matters: With random synthetic data, the KV Smart Router would show no benefit because there’s no prefix reuse to exploit. Mooncake traces provide realistic workload patterns that demonstrate the router’s real-world performance gains while respecting user privacy.


Download and Prepare the Dataset

$# Download the Mooncake arxiv trace dataset
$curl -sL https://raw.githubusercontent.com/kvcache-ai/Mooncake/refs/heads/main/FAST25-release/arxiv-trace/mooncake_trace.jsonl -o mooncake_trace.jsonl
$
$# Trim to 1000 requests for faster benchmarking
$head -n 1000 mooncake_trace.jsonl > mooncake_trace_small.jsonl
$
$# Speed up timestamps 4x (reduces benchmark time from ~12 min to ~3 min)
$python3 - <<'PY'
import json

with open("mooncake_trace_small.jsonl") as src, open("mooncake_trace_4x.jsonl", "w") as dst:
    for line in src:
        rec = json.loads(line)
        rec["timestamp"] = int(rec["timestamp"] / 4)
        dst.write(json.dumps(rec) + "\n")
PY
$
$echo "Dataset ready: mooncake_trace_4x.jsonl (1000 requests, 4x speed)"

Phase 4: Set Up Benchmark Environment

Step 4.1: Deploy Benchmark Pod

Create benchmark-job.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: aiperf-benchmark
  namespace: benchmark
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: benchmark
          image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
          command: ["/bin/sh", "-c", "sleep infinity"]
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              nvidia.com/gpu: 0

Deploy:

$kubectl apply -f benchmark-job.yaml

Wait for pod to be ready:

$kubectl get pods -n benchmark
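
You can also block until the Job's pod is Ready; Kubernetes adds the job-name label to Job pods automatically:

$kubectl wait --for=condition=Ready pod -l job-name=aiperf-benchmark -n benchmark --timeout=120s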

Step 4.2: Copy Dataset to Benchmark Pod

$POD_NAME=$(kubectl get pods -n benchmark -l job-name=aiperf-benchmark -o jsonpath='{.items[0].metadata.name}')
$
$kubectl -n benchmark cp mooncake_trace_4x.jsonl ${POD_NAME}:/tmp/mooncake_trace_4x.jsonl

Step 4.3: Install AIPerf

$kubectl -n benchmark exec ${POD_NAME} -- bash -lc '. /opt/dynamo/venv/bin/activate && pip install -q aiperf'

Phase 5: Run Benchmarks

Step 5.1: Benchmark Router-OFF (Baseline)

$kubectl -n benchmark exec ${POD_NAME} -- bash -lc '
> . /opt/dynamo/venv/bin/activate
> aiperf profile \
> --model "Qwen/Qwen3-32B" \
> --url "http://vllm-agg-no-router-frontend.router-off-test.svc.cluster.local:8000" \
> --endpoint-type chat \
> --input-file /tmp/mooncake_trace_4x.jsonl \
> --custom-dataset-type mooncake_trace \
> --tokenizer "Qwen/Qwen3-32B" \
> --streaming \
> --request-count 1000 \
> --fixed-schedule \
> --output-artifact-dir /tmp/router_off_results
>'

Note: This will take 3-5 minutes. The terminal output includes a summary table.
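
If you want to inspect the raw artifacts on the pod before copying them off, you can list the output directory (the CSV referenced in Step 5.3 should appear here):

$kubectl -n benchmark exec ${POD_NAME} -- ls -lh /tmp/router_off_results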

Step 5.2: Benchmark Router-ON (KV Smart Router)

$kubectl -n benchmark exec ${POD_NAME} -- bash -lc '
> . /opt/dynamo/venv/bin/activate
> aiperf profile \
> --model "Qwen/Qwen3-32B" \
> --url "http://vllm-agg-router-frontend.router-on-test.svc.cluster.local:8000" \
> --endpoint-type chat \
> --input-file /tmp/mooncake_trace_4x.jsonl \
> --custom-dataset-type mooncake_trace \
> --tokenizer "Qwen/Qwen3-32B" \
> --streaming \
> --request-count 1000 \
> --fixed-schedule \
> --output-artifact-dir /tmp/router_on_results
>'

Step 5.3: Collect Results

$# Copy results to local machine
$kubectl -n benchmark cp ${POD_NAME}:/tmp/router_off_results/profile_export_aiperf.csv ./router_off_results.csv
$kubectl -n benchmark cp ${POD_NAME}:/tmp/router_on_results/profile_export_aiperf.csv ./router_on_results.csv

Phase 6: Analyze Results

Key Metrics to Compare

| Metric | Description | What to Look For |
| --- | --- | --- |
| Time to First Token (TTFT) | Latency until the first token arrives | Lower is better; the KV router may reduce it with prefix reuse |
| Inter Token Latency (ITL) | Average time between tokens | Lower is better; indicates generation speed |
| Request Latency | Total end-to-end latency | Lower is better; overall user experience |
| Output Token Throughput | Tokens generated per second (system-wide) | Higher is better; system efficiency |
| Request Throughput | Requests completed per second | Higher is better; capacity |
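
Once both CSVs are copied locally (Step 5.3), a rough side-by-side view can save manual scanning. The exact column layout of profile_export_aiperf.csv may vary by AIPerf version; the sketch below only assumes the first column names the metric and prints matching rows from both files:

$python3 - <<'PY'
# Rough side-by-side comparison of the two AIPerf CSV exports.
import csv

def load(path):
    with open(path, newline="") as f:
        return {row[0]: row[1:] for row in csv.reader(f) if row}

off = load("router_off_results.csv")
on = load("router_on_results.csv")
for metric, values in off.items():
    if metric in on:
        print(f"{metric}\n  OFF: {values}\n  ON:  {on[metric]}")
PY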

Interpreting Results

Your Results May Vary: The improvement from KV Smart Router depends heavily on your workload characteristics:

Factors that increase KV router benefit:

  • High prefix overlap (shared system prompts, templates, document contexts)
  • Long prompts (>2000 tokens) where caching saves significant compute
  • Multi-turn conversations with context carryover
  • Batch workloads with similar queries

Factors that reduce KV router benefit:

  • Unique prompts with no prefix reuse
  • Short prompts (<1000 tokens) where routing overhead exceeds benefit
  • Evenly distributed load where round-robin is already optimal
  • Low request rate where cache eviction negates benefits

Expected Performance:

  • High prefix overlap workloads: 20-50% TTFT improvement
  • Moderate prefix overlap: 10-20% improvement
  • Low prefix overlap: <5% improvement (may not be worth enabling)

KV Smart Router is beneficial when:

  • TTFT improvements > 20%
  • No significant degradation in other metrics
  • Workload demonstrates measurable prefix reuse patterns

Standard routing is better when:

  • KV router shows <10% improvement
  • Increased latency variance is observed
  • Load distribution across workers is more important than cache affinity

Example Comparison

From the terminal output, compare the summary tables:

Router-OFF (Baseline):
  TTFT avg: 12,764 ms    p99: 45,898 ms
  Request Latency avg: 32,978 ms
  Output Token Throughput: 1,614 tokens/sec
  Request Throughput: 8.61 req/sec

Router-ON (KV Router):
  TTFT avg: 8,012 ms    p99: 28,644 ms (37% faster ✅)
  Request Latency avg: 28,972 ms (12% faster ✅)
  Output Token Throughput: 1,746 tokens/sec (8% higher ✅)
  Request Throughput: 9.33 req/sec (8% higher ✅)

In this example with all 8 workers healthy, the KV router significantly outperformed the baseline:

  • 37% faster TTFT - Users see first token much sooner
  • 8% higher throughput - System processes more requests per second
  • 12% lower latency - Faster end-to-end completion

The Mooncake arxiv dataset has sufficient prefix overlap (long input sequences with similar patterns) to benefit from KV cache-aware routing. Workloads with explicit shared prefixes (system prompts, templates) may see even greater improvements.


Phase 7: Cleanup

$# Delete deployments
$kubectl delete dynamographdeployment vllm-agg-no-router -n router-off-test
$kubectl delete dynamographdeployment vllm-agg-router -n router-on-test
$
$# Delete namespaces (removes all resources)
$kubectl delete namespace router-off-test
$kubectl delete namespace router-on-test
$kubectl delete namespace benchmark

Troubleshooting

Issue: Pods Stuck in Pending

Cause: Insufficient GPU resources

Solution:

$# Check GPU availability
$kubectl describe nodes | grep -A 10 "Allocated resources"
$
$# Reduce worker replicas if needed
$kubectl edit dynamographdeployment -n <namespace>

Issue: ImagePullBackOff Errors

Cause: Version mismatch or missing credentials

Solution:

$# Check available versions
$kubectl get pods -n dynamo-system -o yaml | grep image:
$
$# Update deployment YAML to match cluster version

Issue: Operator Not Processing Deployment

Cause: Namespace restrictions

Solution:

  • Ensure Dynamo platform is Helm-installed in the namespace
  • Verify operator has --restrictedNamespace=<your-namespace> argument
  • Check operator logs: kubectl logs -n <namespace> deployment/dynamo-platform-dynamo-operator-controller-manager

Issue: Workers Not Becoming Ready

Cause: Model download failures or probe configuration

Solution:

$# Check worker logs
$kubectl logs -n <namespace> <worker-pod-name>
$
$# Common issues:
$# - Invalid HuggingFace token
$# - Network connectivity
$# - Insufficient disk space for model

Issue: Workers Restarting in CrashLoopBackOff

Cause: Startup probe timeout - workers killed before finishing initialization

Symptoms:

  • Pods show “Container main failed startup probe, will be restarted”
  • Logs show model still downloading or loading when pod is killed
  • Large models (>30GB) take longer than the default 22-minute timeout

Solution: Increase the startup probe failureThreshold:

$# Patch the deployment to allow 32 minutes instead of 22
$kubectl patch dynamographdeployment <deployment-name> -n <namespace> --type='json' \
> -p='[{"op": "replace", "path": "/spec/services/VllmDecodeWorker/extraPodSpec/mainContainer/startupProbe/failureThreshold", "value": 60}]'

Or update your YAML before deploying:

1startupProbe:
2 httpGet:
3 path: /health
4 port: 9090
5 initialDelaySeconds: 120
6 periodSeconds: 30
7 timeoutSeconds: 10
8 failureThreshold: 60 # 32 minutes total (120s + 60*30s)

Model Loading Times (approximate):

  • Qwen3-32B: ~20-25 minutes (first download)
  • Llama-70B: ~25-30 minutes (first download)
  • With cached model on node: ~2-5 minutes

Issue: Unequal Worker Health

Cause: Resource constraints, image pull issues, or configuration errors

Solution:

$# Check all worker status
$kubectl get pods -n <namespace> -l nvidia.com/dynamo-component-type=worker
$
$# Describe problematic pods
$kubectl describe pod <pod-name> -n <namespace>
$
$# Fix issues before benchmarking or results will be skewed

Advanced Configuration

Testing Different Models

Replace Qwen/Qwen3-32B with your model in:

  • Deployment YAML args section
  • AIPerf --model and --tokenizer parameters

Adjusting Worker Count

Change replicas: 8 in the deployment YAMLs. Ensure both deployments use the same count for fair comparison.
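
For example, both deployments could be scaled in lockstep with a JSON patch in the same style as the one shown under Troubleshooting (a sketch; verify the field path against your CRD version):

$# Scale both deployments to the same worker count (4 here) -- keep them equal
$kubectl patch dynamographdeployment vllm-agg-no-router -n router-off-test --type='json' \
>   -p='[{"op": "replace", "path": "/spec/services/VllmDecodeWorker/replicas", "value": 4}]'
$kubectl patch dynamographdeployment vllm-agg-router -n router-on-test --type='json' \
>   -p='[{"op": "replace", "path": "/spec/services/VllmDecodeWorker/replicas", "value": 4}]'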

Using Custom Datasets

Replace mooncake dataset with your own JSONL file:

  • Format: One request per line with timestamp field
  • AIPerf supports various formats via --custom-dataset-type

Disaggregated Prefill/Decode

For advanced testing, add separate prefill workers:

VllmPrefillWorker:
  componentType: worker
  replicas: 2
  # ... configuration

Best Practices

  1. Equal Conditions: Ensure both deployments have identical worker counts and health before benchmarking
  2. Warm-Up: Run a small test (100 requests) before the full benchmark to warm up caches (see the sketch after this list)
  3. Multiple Runs: Run benchmarks 3+ times and average results for statistical significance
  4. Monitor Workers: Watch for any pod restarts or issues during benchmark runs
  5. Document Conditions: Record cluster state, worker health, and any anomalies
  6. Test Relevant Workloads: Use datasets that match your actual use case for meaningful results
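
A warm-up run (point 2 above) can reuse the trace already copied into the benchmark pod; a sketch for the baseline deployment that trims the trace to 100 requests and discards the artifacts (repeat with the router-on URL):

$kubectl -n benchmark exec ${POD_NAME} -- bash -lc '
>   . /opt/dynamo/venv/bin/activate
>   head -n 100 /tmp/mooncake_trace_4x.jsonl > /tmp/mooncake_warmup.jsonl
>   aiperf profile \
>     --model "Qwen/Qwen3-32B" \
>     --url "http://vllm-agg-no-router-frontend.router-off-test.svc.cluster.local:8000" \
>     --endpoint-type chat \
>     --input-file /tmp/mooncake_warmup.jsonl \
>     --custom-dataset-type mooncake_trace \
>     --tokenizer "Qwen/Qwen3-32B" \
>     --streaming \
>     --request-count 100 \
>     --fixed-schedule \
>     --output-artifact-dir /tmp/warmup_results
>'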

Conclusion

This guide provides a complete methodology for A/B testing Dynamo’s KV Smart Router. The KV router’s effectiveness depends heavily on workload characteristics—datasets with high prefix overlap will show the most benefit.

For questions or issues, consult the Dynamo documentation or open an issue on GitHub.


Appendix: Files Reference

  • router-off-deployment.yaml: Standard routing deployment
  • router-on-deployment.yaml: KV router enabled deployment
  • benchmark-job.yaml: AIPerf benchmark pod
  • prepare-dataset.sh: Dataset preparation script
  • Results CSVs: Detailed metrics from AIPerf

Repository: https://github.com/ai-dynamo/dynamo