SGLang for Agentic Workloads
Priority scheduling, KV cache eviction policies, and cache pinning for multi-turn agentic serving
This guide covers SGLang-specific configuration for agentic serving with Dynamo. It explains which SGLang engine flags to enable, how Dynamo’s agent hints map to SGLang behavior, and how to use experimental cache pinning to protect KV cache for high-value conversations.
Overview
Agentic workloads (tool-calling loops, multi-turn reasoning, code generation pipelines) have different performance characteristics than batch inference:
- Prefix-heavy: Successive turns share a growing conversation prefix. KV cache reuse is critical for low TTFT.
- Priority-sensitive: Some requests (user-facing agent turns) matter more than background tasks.
- Long-lived: Conversations span minutes to hours. Cache eviction under memory pressure can destroy accumulated KV state.
Dynamo’s agent hints give the router per-request metadata. SGLang’s engine flags control how that metadata affects scheduling and eviction on the worker.
SGLang Engine Flags
Priority Scheduling
Enable priority-based scheduling so the engine respects the priority value from nvext.agent_hints.priority:
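A worker launch might look like the following sketch. The --enable-priority-scheduling flag name is an assumption based on recent SGLang server arguments, and the model path is a placeholder; verify both against your SGLang version:

```shell
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --enable-priority-scheduling
```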
When priority scheduling is enabled, the engine uses the priority field from nvext.agent_hints to order requests in its internal queue. Requests with higher effective priority are scheduled before lower-priority ones. Ties are broken by arrival time.
Priority-Based KV Cache Eviction
By default, SGLang evicts radix tree nodes using LRU. You can switch to priority-based eviction so that low-priority cache entries are evicted before high-priority ones:
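For example (standard sglang.launch_server invocation; the model path is a placeholder):

```shell
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --radix-eviction-policy priority
```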
This does not require HiCache. It controls GPU-only radix tree eviction. When the GPU KV cache is full:
- lru: Evicts the least recently used leaf nodes first.
- priority: Evicts lowest-priority leaf nodes first. Nodes with equal priority fall back to LRU ordering.
Interaction with HiCache
When both --radix-eviction-policy priority and --enable-hierarchical-cache are enabled, priority affects eviction at both tiers: the GPU radix tree and the host (CPU) memory pool.
The practical impact depends on your write policy. With write_through, GPU eviction is just a demotion — the real deletion happens at host eviction, which is where priority ordering matters most.
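A combined configuration might look like the following sketch; the --hicache-write-policy flag name and its write_through value are assumptions based on SGLang's HiCache options, and the model path is a placeholder:

```shell
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --radix-eviction-policy priority \
  --enable-hierarchical-cache \
  --hicache-write-policy write_through
```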
How Agent Hints Map to SGLang
Dynamo's nvext.agent_hints fields are consumed by the router and forwarded to SGLang workers. The priority hint, for example, drives both the scheduler's queue ordering (when priority scheduling is enabled) and the eviction order of radix tree nodes (when --radix-eviction-policy priority is set).
Example: Agentic Request with Hints
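A request carrying agent hints might look like the following sketch. Only the priority field under agent_hints is documented above; the endpoint shape, model name, and priority value are illustrative assumptions:

```json
{
  "model": "Qwen/Qwen2.5-7B-Instruct",
  "messages": [
    {"role": "user", "content": "Run the next step of the plan."}
  ],
  "nvext": {
    "agent_hints": {
      "priority": 100
    }
  }
}
```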
Cache Pinning (Experimental)
Required PRs:
- SGLang: feat: TTL-based prefix pinning with refresh-on-hit for HiRadixCache
- Dynamo: feat: wire nvext.cache_control TTL-based pinning through Dynamo router
Cache pinning lets you explicitly protect KV cache for high-value conversation prefixes. When a request includes nvext.cache_control, the router fires a pin_prefix call to the SGLang worker after generation completes. Pinned nodes resist eviction for the specified TTL — even under memory pressure, they are retained (demoted to host memory with HiCache rather than deleted).
How It Works
- The client includes nvext.cache_control with a TTL in the request.
- The Dynamo preprocessor extracts the TTL and attaches it to routing hints.
- The router routes the request normally and records the token IDs in a PinState.
- After the response stream completes, the router spawns a fire-and-forget pin_prefix RPC to the worker that served the request.
- The worker walks the radix tree along the token sequence and pins each node, setting pin_expiry and acquiring a host_ref_counter hold that prevents eviction.
- When TTL expires, the pin is cleared and the node becomes eligible for normal eviction.
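The router-side flow can be sketched as follows. The names PinState and pin_prefix mirror this guide, but everything else here (fields, signatures, the simulated RPC) is an illustrative assumption, not the real Dynamo router code:

```python
import asyncio
from dataclasses import dataclass

# Record of completed pins, standing in for worker-side state.
pinned: list[str] = []

@dataclass
class PinState:
    worker_id: str
    token_ids: list[int]
    ttl_secs: int

async def pin_prefix(state: PinState) -> None:
    # Stand-in for the RPC to the SGLang worker's cache_control endpoint.
    await asyncio.sleep(0)
    pinned.append(f"{state.worker_id}:{len(state.token_ids)}:{state.ttl_secs}")

async def handle_request() -> None:
    # Token IDs recorded while routing the request.
    state = PinState(worker_id="worker-0", token_ids=list(range(128)), ttl_secs=600)
    # ... stream the response to the client here ...
    # After the stream completes, spawn the pin as a background task so it
    # never blocks the response path (fire-and-forget).
    task = asyncio.create_task(pin_prefix(state))
    await task  # awaited here only so the sketch finishes deterministically

asyncio.run(handle_request())
print(pinned)
```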
Enabling Cache Pinning
Frontend: The Dynamo frontend must include the cache pinning PR above and have its TTL pinning flag enabled at launch (see that PR for the exact flag name).
SGLang worker: The worker receives PIN requests via its cache_control service mesh endpoint. You must set the SGLANG_HICACHE_MAX_PINNED_RATIO environment variable to a non-zero value — pinning is disabled by default.
HiCache is required (--enable-hierarchical-cache). Without it, the scheduler rejects PIN requests. For best results, use write_through so that pinned nodes demote to host memory instead of being deleted when GPU memory fills:
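Putting the pieces together, a worker launch for pinning might look like this sketch. SGLANG_HICACHE_MAX_PINNED_RATIO comes from the pinning PR above; the --hicache-write-policy flag name is an assumption based on SGLang's HiCache options, and the model path is a placeholder:

```shell
SGLANG_HICACHE_MAX_PINNED_RATIO=0.1 \
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --enable-hierarchical-cache \
  --hicache-write-policy write_through \
  --enable-cache-report
```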
Request Format
Include cache_control as a top-level field in nvext:
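For example (the ttl key name and its unit of seconds are assumptions based on the TTL-based design described above; the model name is a placeholder):

```json
{
  "model": "Qwen/Qwen2.5-7B-Instruct",
  "messages": [
    {"role": "user", "content": "Continue the conversation."}
  ],
  "nvext": {
    "cache_control": {"ttl": 600}
  }
}
```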
Python Example
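A minimal client-side sketch that builds a request with both agent hints and cache pinning. The nvext structure follows this guide; the ttl key, priority value, model name, and endpoint URL are illustrative assumptions:

```python
import json

# Request body combining an agent hint with a cache pin.
payload = {
    "model": "Qwen/Qwen2.5-7B-Instruct",
    "messages": [{"role": "user", "content": "Summarize our plan so far."}],
    "nvext": {
        "agent_hints": {"priority": 100},
        "cache_control": {"ttl": 600},
    },
}

body = json.dumps(payload)

# To send it against a running Dynamo frontend, e.g. with requests:
#   requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(body)
```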
Verifying Cache Hits
The response includes prompt_tokens_details.cached_tokens in the usage object when --enable-cache-report is set on the SGLang worker:
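The usage object looks roughly like this (token counts are illustrative):

```json
{
  "usage": {
    "prompt_tokens": 4096,
    "completion_tokens": 128,
    "total_tokens": 4224,
    "prompt_tokens_details": {
      "cached_tokens": 3968
    }
  }
}
```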
A high cached_tokens / prompt_tokens ratio on subsequent turns confirms that the pinned prefix was preserved.
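A quick way to check that ratio from a parsed response (the numbers here are illustrative):

```python
# "usage" as parsed from a chat completion response; numbers are illustrative.
usage = {
    "prompt_tokens": 4096,
    "prompt_tokens_details": {"cached_tokens": 3968},
}

hit_ratio = usage["prompt_tokens_details"]["cached_tokens"] / usage["prompt_tokens"]
print(f"prefix cache hit ratio: {hit_ratio:.1%}")
```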
Limitations
- Pinning disabled by default: SGLANG_HICACHE_MAX_PINNED_RATIO defaults to 0.0. You must set it to a non-zero value (e.g., 0.1) or all PIN requests will be rejected.
- HiCache required: The scheduler rejects PIN requests unless --enable-hierarchical-cache is set.
- TTL clamping: Values are clamped to [300, 3600] seconds. You cannot pin for less than 5 minutes or more than 1 hour.
- Pin budget: Pinned tokens consume a budget controlled by SGLANG_HICACHE_MAX_PINNED_RATIO (fraction of host pool capacity). Requests exceeding this budget are rejected.
- No priority on pinned nodes: pin_prefix does not set a priority on the radix tree nodes. All pinned nodes have equal eviction priority and fall back to LRU ordering among themselves when host memory fills.
- Requires stack restart for A/B testing: Pins persist in cache across benchmark runs. When comparing pinned vs. unpinned performance, restart the full stack between phases to avoid false cache hits.
See Also
- Agent Hints: Per-request hint reference
- NVIDIA Request Extensions (nvext): Full nvext field reference
- Router Guide: Router configuration and CLI arguments
- SGLang HiCache: Enabling hierarchical KV cache