KV Event Publishing for Custom Engines
This document explains how to implement KV event publishing for custom inference engines, enabling them to participate in Dynamo’s KV cache-aware routing.
Overview
The KV Router relies on real-time events from backend workers to track which KV cache blocks are stored on each worker. When your custom engine allocates or evicts KV cache blocks, it should publish these events so the router can make optimal routing decisions.
Events are published over the Dynamo event plane, a transport-agnostic pub/sub layer that supports both NATS and ZMQ backends (see Event Plane for details). The KvEventPublisher binding handles all transport concerns — your engine code does not interact with the event plane directly.
KvEventPublisher supports two publishing modes:
- Direct publishing — Your engine calls `publish_stored()`/`publish_removed()` to push events directly over the event plane. Simplest approach for custom engines.
- ZMQ relay — For engines that emit raw KV events over a ZMQ socket (like vLLM and SGLang). The publisher subscribes to the ZMQ endpoint and relays events to the event plane automatically.
Event Types
The KV cache supports three event types:

- `BlockStored` — one or more blocks were written to the worker's KV cache
- `BlockRemoved` — blocks were evicted from the cache
- `AllBlocksCleared` — the worker's entire cache was cleared
Event Structure
Each event contains:
- `event_id`: Monotonically increasing identifier per worker (managed internally by the publisher)
- `dp_rank`: Data parallel rank (0 if DP not enabled)
- `data`: One of `Stored`, `Removed`, or `Cleared`
For BlockStored events:
- `token_ids`: List of token IDs for the stored blocks
- `block_hashes`: List of sequence block hashes from the engine’s block manager. These are cumulative hashes that incorporate all tokens from the start of the sequence up to and including the current block (not just the tokens within that block). This enables prefix matching across requests.
- `num_block_tokens`: Number of tokens per block (should all equal `kv_block_size`)
- `parent_hash`: Hash of the parent block. Required for all blocks except the first block in a sequence (which has no parent).
- `lora_name`: LoRA adapter name string (omit or `None` for base model). When set, the adapter name is incorporated into block hash computation so that blocks for different LoRA adapters (or the base model) are never conflated.
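The cumulative-hash scheme can be sketched in pure Python. The hash function below is illustrative only (a real engine uses its block manager's own hashing); it shows how each block's hash chains from its parent and how `lora_name` changes the result:

```python
import hashlib
import struct

def block_hash(parent_hash, token_ids, lora_name=None):
    """Illustrative cumulative block hash: mixes the parent block's hash,
    this block's tokens, and the LoRA adapter name (if any)."""
    h = hashlib.sha256()
    h.update(struct.pack("<q", parent_hash if parent_hash is not None else 0))
    for t in token_ids:
        h.update(struct.pack("<I", t))
    if lora_name:
        h.update(lora_name.encode())
    # Signed 64-bit integer, matching the Python API's hash representation.
    return struct.unpack("<q", h.digest()[:8])[0]

# Chain hashes for a sequence split into full blocks of kv_block_size tokens.
kv_block_size = 4
tokens = list(range(10))  # 10 tokens -> two full blocks; the partial tail is not stored
hashes = []
parent = None
for i in range(0, len(tokens) - len(tokens) % kv_block_size, kv_block_size):
    parent = block_hash(parent, tokens[i : i + kv_block_size])
    hashes.append(parent)

# Two sequences sharing a prefix produce identical leading hashes,
# which is exactly what enables prefix matching across requests.
```

Because each hash folds in its parent, two requests with the same prefix yield the same leading hashes regardless of how their suffixes differ.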
For BlockRemoved events:
- `block_hashes`: List of sequence block hashes being evicted
Direct Publishing (Recommended for Custom Engines)
Call publish_stored() and publish_removed() directly from your engine code. The publisher handles event IDs, serialization, and transport.
When to use:
- Building a custom inference engine from scratch
- Your engine doesn’t have a ZMQ-based event system
- You want the simplest integration path
Basic Setup
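The setup and call pattern can be sketched as follows. The class below is a stand-in so the example runs without the Dynamo bindings installed; the method names and parameters mirror the ones documented on this page, but the real `KvEventPublisher` comes from Dynamo's Python bindings and its exact constructor signature may differ.

```python
# Stand-in mirroring the documented KvEventPublisher interface.
# In real code, import KvEventPublisher from the Dynamo Python bindings.
class KvEventPublisher:
    def __init__(self, kv_block_size):
        self.kv_block_size = kv_block_size
        self.events = []  # the real publisher sends these over the event plane

    def publish_stored(self, token_ids, num_block_tokens, block_hashes,
                       parent_hash=None, lora_name=None):
        # Event IDs are assigned internally, in monotonically increasing order.
        self.events.append(("stored", len(self.events), list(block_hashes)))

    def publish_removed(self, block_hashes):
        self.events.append(("removed", len(self.events), list(block_hashes)))

    def shutdown(self):
        pass  # the real publisher stops its background tasks here

# Basic setup: one publisher per worker, kv_block_size matching the engine.
publisher = KvEventPublisher(kv_block_size=16)
publisher.publish_stored(
    token_ids=list(range(16)),  # one full block of tokens
    num_block_tokens=[16],      # must all equal kv_block_size
    block_hashes=[0x1234],      # cumulative sequence hash from the block manager
    parent_hash=None,           # first block in the sequence has no parent
)
publisher.publish_removed(block_hashes=[0x1234])
publisher.shutdown()
```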
Integration with Your Engine
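One way to wire this in is to call the publisher from the engine's block-lifecycle hooks: publish stored events after blocks are committed to cache, and removed events when blocks are freed. The block manager below is a hypothetical toy, and the recorder stands in for a real `KvEventPublisher`, so the sketch runs without Dynamo installed:

```python
# Minimal recorder standing in for a constructed KvEventPublisher.
class RecordingPublisher:
    def __init__(self):
        self.stored, self.removed = [], []

    def publish_stored(self, token_ids, num_block_tokens, block_hashes,
                       parent_hash=None, lora_name=None):
        self.stored.append(list(block_hashes))

    def publish_removed(self, block_hashes):
        self.removed.append(list(block_hashes))

class ToyBlockManager:
    """Hypothetical engine block manager that reports cache changes."""
    def __init__(self, publisher, kv_block_size):
        self.publisher = publisher
        self.kv_block_size = kv_block_size
        self.blocks = set()  # hashes of blocks currently in cache

    def on_blocks_allocated(self, token_ids, block_hashes, parent_hash=None):
        # Called after the engine commits new KV blocks to its cache.
        self.blocks.update(block_hashes)
        self.publisher.publish_stored(
            token_ids=token_ids,
            num_block_tokens=[self.kv_block_size] * len(block_hashes),
            block_hashes=block_hashes,
            parent_hash=parent_hash,
        )

    def on_blocks_evicted(self, block_hashes):
        # Called when the engine frees blocks (e.g. LRU eviction).
        self.blocks.difference_update(block_hashes)
        self.publisher.publish_removed(block_hashes=block_hashes)

pub = RecordingPublisher()
mgr = ToyBlockManager(pub, kv_block_size=16)
mgr.on_blocks_allocated(token_ids=list(range(32)), block_hashes=[101, 202])
mgr.on_blocks_evicted(block_hashes=[101])
```

The key design point is that events are published at the moment cache state actually changes, so the router's view never drifts far from the worker's real cache contents.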
ZMQ Relay (For Engines with Raw KV Events)
For engines that already publish raw KV events over a ZMQ socket (like vLLM and SGLang), use the same KvEventPublisher with a zmq_endpoint. The publisher subscribes to the ZMQ socket and relays events to the event plane automatically.
When to use:
- Your engine already publishes KV events via ZMQ (like vLLM or SGLang)
- You want to decouple event publishing from your engine’s main loop
Setup
Pass `zmq_endpoint` (and optional `zmq_topic`) to the same `KvEventPublisher`:
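A sketch of the relay configuration is below. The class is a stand-in so the snippet runs without Dynamo installed, and the endpoint value is hypothetical; the real constructor comes from Dynamo's Python bindings and its exact signature may differ.

```python
# Stand-in mirroring the documented relay-mode configuration of KvEventPublisher.
class KvEventPublisher:
    def __init__(self, kv_block_size, zmq_endpoint=None, zmq_topic=""):
        self.kv_block_size = kv_block_size
        self.zmq_endpoint = zmq_endpoint  # when set, relay mode is active:
        self.zmq_topic = zmq_topic        # the publisher subscribes here and forwards events

# Relay mode: point the publisher at the engine's raw KV event socket.
publisher = KvEventPublisher(
    kv_block_size=16,
    zmq_endpoint="tcp://127.0.0.1:5557",  # hypothetical endpoint for illustration
    zmq_topic="",                          # optional topic filter
)
```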
No further calls to publish_stored() / publish_removed() are needed — the publisher reads events from the ZMQ socket and forwards them automatically.
ZMQ Wire Format
The ZMQ message format (compatible with vLLM / SGLang):
Each event in the payload is a dictionary with a `type` field (`BlockStored`, `BlockRemoved`, or `AllBlocksCleared`).
For BlockStored:
For BlockRemoved:
For AllBlocksCleared:
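Putting the pieces together, the per-event dictionaries look roughly like the sketch below, built from the fields described in Event Structure. Exact on-the-wire field names and encoding should be checked against your engine's (vLLM / SGLang) KV event schema:

```python
# Illustrative per-event payload dictionaries (field names from this page;
# verify against your engine's actual wire schema before relying on them).
block_stored = {
    "type": "BlockStored",
    "token_ids": [1, 2, 3, 4],
    "block_hashes": [0x1A2B],
    "num_block_tokens": [4],   # all entries equal kv_block_size
    "parent_hash": None,       # None for the first block in a sequence
    "lora_name": None,         # None for the base model
}
block_removed = {
    "type": "BlockRemoved",
    "block_hashes": [0x1A2B],
}
all_cleared = {
    "type": "AllBlocksCleared",
}

events = [block_stored, block_removed, all_cleared]
```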
API Reference
KvEventPublisher
publish_stored()
Publish a block-stored event. Event IDs are managed internally. When lora_name is provided, the adapter name is mixed into block hash computation so blocks cached under different adapters produce distinct hashes.
publish_removed()
Publish a block-removed event. Event IDs are managed internally.
shutdown()
Stop background tasks (ZMQ listener, event forwarding).
Best Practices
- `kv_block_size` must match your engine’s actual block size.
- `parent_hash` is required for all blocks except the first in a sequence — it links blocks to enable prefix matching.
- Block hashes are signed 64-bit integers in the Python API. The publisher handles conversion internally.
- Event ordering is automatic — the publisher assigns monotonically increasing event IDs. You do not need to track event IDs yourself.
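These invariants are cheap to enforce before publishing. The helper below is a hypothetical pre-publish sanity check, not part of the Dynamo API:

```python
# Hypothetical sanity check enforcing the practices above before publishing.
def validate_stored_event(kv_block_size, token_ids, num_block_tokens,
                          block_hashes, parent_hash):
    assert len(num_block_tokens) == len(block_hashes), \
        "one token count per block hash"
    assert all(n == kv_block_size for n in num_block_tokens), \
        "num_block_tokens entries must all equal kv_block_size"
    assert len(token_ids) == sum(num_block_tokens), \
        "token_ids must exactly fill the stored blocks"
    assert all(-(2**63) <= h < 2**63 for h in block_hashes), \
        "block hashes are signed 64-bit integers"
    # parent_hash is None only for the first block of a sequence;
    # every later block must carry its parent's cumulative hash.
    return True
```

Running this in debug builds catches size mismatches at the publish site instead of as silent routing misses downstream.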
See Also
- Event Plane: Transport options (NATS, ZMQ) and configuration
- Router Guide: Configuration, tuning, and production setup
- Router Design: Architecture details and event transport modes