Planner Examples
Practical examples for deploying the SLA Planner with different configurations. For deployment concepts, see the Planner Guide. For a quick overview, see the Planner README.
Basic Examples
Minimal DGDR with AIC (Fastest)
The simplest way to deploy with the SLA planner. Uses AI Configurator for offline profiling (20-30 seconds instead of hours):
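A minimal sketch of what such a DGDR might look like. Everything beyond `spec.model`, `spec.backend`, and `profilingConfig.config` (which this page names) is an assumption; the sample file `benchmarks/profiler/deploy/profile_sla_aic_dgdr.yaml` is authoritative for the real schema.

```yaml
# Illustrative sketch only -- compare against
# benchmarks/profiler/deploy/profile_sla_aic_dgdr.yaml for the real schema.
apiVersion: nvidia.com/v1alpha1        # assumed API group/version
kind: DynamoGraphDeploymentRequest
metadata:
  name: sla-planner-aic-example
spec:
  model: Qwen/Qwen3-0.6B               # placeholder model
  backend: trtllm                      # placeholder backend
  profilingConfig:
    config:
      useAiConfigurator: true          # hypothetical field name (camelCase per 0.8.1+)
```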
Deploy:
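Assuming the sample file path from this repository (namespace is a placeholder):

```shell
kubectl apply -f benchmarks/profiler/deploy/profile_sla_aic_dgdr.yaml -n dynamo
```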
Online Profiling (Real Measurements)
Standard online profiling runs real GPU measurements for more accurate results and typically takes 2-4 hours:
Deploy:
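Assuming the standard online-profiling sample from this repository (namespace is a placeholder):

```shell
kubectl apply -f benchmarks/profiler/deploy/profile_sla_dgdr.yaml -n dynamo
```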
Available sample DGDRs in benchmarks/profiler/deploy/:
- `profile_sla_dgdr.yaml`: Standard online profiling for dense models
- `profile_sla_aic_dgdr.yaml`: Fast offline profiling using AI Configurator
- `profile_sla_moe_dgdr.yaml`: Online profiling for MoE models (SGLang)
**Profiling config field casing:** Prior to 0.8.1, fields under `profilingConfig.config` use snake_case. Starting with 0.8.1, fields use camelCase. snake_case remains supported for backwards compatibility, but the example DGDRs use camelCase.
Kubernetes Examples
MoE Models (SGLang)
For Mixture-of-Experts models like DeepSeek-R1, use SGLang backend:
Deploy:
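Assuming the MoE sample from this repository (namespace is a placeholder):

```shell
kubectl apply -f benchmarks/profiler/deploy/profile_sla_moe_dgdr.yaml -n dynamo
```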
Using Existing DGD Configs (Custom Setups)
Reference an existing DynamoGraphDeployment config via ConfigMap:
Step 1: Create ConfigMap from your DGD config:
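For example (the ConfigMap name, key, file path, and namespace are all placeholders):

```shell
kubectl create configmap my-dgd-config \
  --from-file=disagg.yaml=./my-dgd.yaml \
  -n dynamo
```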
Step 2: Reference it in your DGDR:
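A hedged sketch of the DGDR side. The exact field for referencing the ConfigMap is an assumption here; check the DGDR Configuration Reference for the real name. `spec.model` and `spec.backend` are injected into the final configuration, as described below.

```yaml
# Illustrative sketch only -- the ConfigMap-reference field name is an
# assumption; see the DGDR Configuration Reference.
apiVersion: nvidia.com/v1alpha1        # assumed API group/version
kind: DynamoGraphDeploymentRequest
metadata:
  name: custom-setup-example
spec:
  model: meta-llama/Llama-3.1-8B-Instruct
  backend: vllm
  profilingConfig:
    configMapRef:                      # hypothetical field name
      name: my-dgd-config
```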
The profiler uses the DGD config from the ConfigMap as a base template, then optimizes it based on your SLA targets. The controller automatically injects spec.model and spec.backend into the final configuration.
Inline Configuration (Simple Use Cases)
For simple use cases without a custom DGD config, provide profiler configuration directly. The profiler auto-generates a basic DGD configuration:
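A hedged sketch of an inline configuration. Field names under `profilingConfig.config` are assumptions (camelCase per 0.8.1+); the SLA target values are placeholders.

```yaml
# Illustrative sketch only -- field names under profilingConfig.config are
# assumptions; see the DGDR Configuration Reference.
apiVersion: nvidia.com/v1alpha1        # assumed API group/version
kind: DynamoGraphDeploymentRequest
metadata:
  name: inline-example
spec:
  model: Qwen/Qwen3-0.6B
  backend: vllm
  profilingConfig:
    config:
      sla:                             # hypothetical SLA block
        ttft: 200                      # target time-to-first-token, ms (placeholder)
        itl: 10                        # target inter-token latency, ms (placeholder)
```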
Mocker Deployment (Testing)
Deploy a mocker backend that simulates GPU timing behavior without real GPUs. Useful for:
- Large-scale experiments without GPU resources
- Testing planner behavior and infrastructure
- Validating deployment configurations
Profiling runs against the real backend (via GPUs or AIC). The mocker deployment then uses profiling data to simulate realistic timing.
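A hedged sketch of how the mocker might be selected; the exact switch is an assumption, so treat this as a shape to look for in the samples rather than the real schema.

```yaml
# Illustrative fragment only -- the mocker switch is an assumption.
spec:
  backend: mocker                      # hypothetical value; simulates GPU timing
```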
Model Cache PVC (0.8.1+)
For large models, use a pre-populated PVC instead of downloading from HuggingFace:
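A hedged sketch of the shape such a configuration might take; the model-cache field names are assumptions, and the SLA-Driven Profiling page has the real details.

```yaml
# Illustrative fragment only (0.8.1+) -- field names are assumptions.
spec:
  model: deepseek-ai/DeepSeek-R1
  backend: sglang
  modelCache:                          # hypothetical field
    pvcName: model-cache-pvc           # pre-populated PVC with model weights
```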
See SLA-Driven Profiling for configuration details.
Advanced Examples
Custom Load Predictors
Warm-starting with Trace Data
Pre-load predictors with historical request patterns before live traffic:
The trace file should be in mooncake-style JSONL format with request-count, ISL, and OSL samples.
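A few illustrative lines in that shape. Field names follow the public mooncake trace format (timestamps in milliseconds, input/output lengths as ISL/OSL); treat the exact names as assumptions here.

```json
{"timestamp": 0,    "input_length": 4096, "output_length": 256, "hash_ids": [0, 1]}
{"timestamp": 1000, "input_length": 512,  "output_length": 128, "hash_ids": [2]}
{"timestamp": 2050, "input_length": 2048, "output_length": 512, "hash_ids": [0, 3]}
```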
Kalman Filter Tuning
For workloads with rapid changes, tune the Kalman filter:
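A hedged sketch of what tuning could look like as planner arguments. The flag names below are hypothetical, not the planner's documented CLI; the comments describe the standard Kalman-filter trade-off.

```yaml
# Hypothetical planner args -- consult the Planner Guide for real flag names.
args:
  - --load-predictor=kalman
  - --kalman-process-noise=0.1         # larger -> tracks rapid changes faster
  - --kalman-measurement-noise=1.0     # larger -> smooths noisy observations more
```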
Prophet for Seasonal Workloads
For workloads with daily/weekly patterns:
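A hedged sketch in the same spirit; flag names are hypothetical, and Prophet's strength here is modeling daily/weekly seasonality.

```yaml
# Hypothetical planner args -- consult the Planner Guide for real flag names.
args:
  - --load-predictor=prophet           # Prophet fits daily/weekly seasonal patterns
  - --adjustment-interval=180          # seconds between forecasts (placeholder)
```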
Virtual Connector
For non-Kubernetes environments, use the VirtualConnector to communicate scaling decisions:
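A minimal, runnable sketch of the idea, not the real API: the actual VirtualConnector (see `components/planner/test/test_virtual_connector.py`) talks to the runtime's coordination store, so a plain in-memory dict stands in here just to show the shape of a scaling decision. All key and field names below are assumptions.

```python
# Illustrative sketch only: an in-memory dict stands in for the runtime's
# KV store. Key paths and decision fields are assumptions, not the real API.
import json


class FakeKVStore:
    """In-memory stand-in for the coordination store the planner writes to."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


def publish_decision(store, namespace, num_prefill, num_decode):
    """Planner side: publish the desired worker counts as JSON."""
    decision = {"num_prefill_workers": num_prefill,
                "num_decode_workers": num_decode}
    store.put(f"{namespace}/planner/decision", json.dumps(decision))


def read_decision(store, namespace):
    """Deployment side: poll the latest decision and act on it out-of-band."""
    raw = store.get(f"{namespace}/planner/decision")
    return json.loads(raw) if raw else None


store = FakeKVStore()
publish_decision(store, "dynamo", num_prefill=2, num_decode=4)
print(read_decision(store, "dynamo"))
```

The real connector adds the details this sketch omits (readiness handshakes, revision counters, and error handling); the test file above is the working reference.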
See components/planner/test/test_virtual_connector.py for a full working example.
Planner Configuration Passthrough
Pass planner-specific settings through the DGDR:
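A hedged sketch of the passthrough shape; the `planner` block and its field names are assumptions.

```yaml
# Illustrative fragment only -- the passthrough block name is an assumption.
spec:
  profilingConfig:
    config:
      planner:                         # hypothetical block forwarded to the planner
        adjustmentInterval: 180        # placeholder setting
```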
Review Before Deploy (autoApply: false)
Disable auto-deployment to inspect the generated DGD:
After profiling completes:
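Commands along these lines let you inspect the result before deploying; the resource names and namespace are assumptions, so adjust them to your installed CRDs.

```shell
# Inspect the DGDR and the generated DGD before applying anything.
kubectl get dynamographdeploymentrequests -n dynamo
kubectl get dynamographdeployments -n dynamo -o yaml
# When the generated DGD looks right, apply it to deploy.
```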
Profiling Artifacts with PVC
Save detailed profiling artifacts (plots, logs, raw data) to a PVC:
Setup:
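A standard PersistentVolumeClaim works here; the name and size are placeholders, and whatever name you choose must match the PVC referenced in your DGDR.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: profiling-pvc                  # placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi                    # placeholder size
```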
Access results:
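One way to pull the artifacts out, assuming the claim name `profiling-pvc` from above: mount the PVC in a throwaway pod, copy the files locally, then clean up.

```shell
# Mount the results PVC in a temporary pod, then copy the artifacts out.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pvc-reader
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: results
      mountPath: /results
  volumes:
  - name: results
    persistentVolumeClaim:
      claimName: profiling-pvc
EOF
kubectl cp pvc-reader:/results ./profiling-results
kubectl delete pod pvc-reader
```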
Related Documentation
- Planner README — Overview and quick start
- Planner Guide — Deployment, configuration, integration
- Planner Design — Architecture deep-dive
- DGDR Configuration Reference
- SLA-Driven Profiling