CLI Reference
dynamo-run is a Rust binary that lets you easily run a model and explore the Dynamo components, and it demonstrates the Rust API. It supports the mistral.rs engine, as well as the echo and mocker testing engines.
It is primarily for development and rapid prototyping. For production use we recommend the Python-wrapped components; see the main project README.
Basics
Usage: See dynamo-run --help
Example: dynamo-run Qwen/Qwen3-0.6B
Set the environment variable DYN_LOG to adjust the logging level; for example, export DYN_LOG=debug. It has the same syntax as RUST_LOG.
To adjust verbosity, use -v to enable debug logging or -vv to enable full trace logging. For example:
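A sketch of both flags; the model name is only an illustration.
```
dynamo-run -v Qwen/Qwen3-0.6B    # debug-level logging
dynamo-run -vv Qwen/Qwen3-0.6B   # trace-level logging
```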
Use model from Hugging Face
To automatically download Qwen3 4B from Hugging Face (16 GiB download) and to start it in interactive text mode:
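A minimal sketch; interactive text mode is the default input, so only the repository ID is needed.
```
dynamo-run Qwen/Qwen3-4B
```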
The general format for HF download follows this pattern:
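A hedged sketch of the pattern; the in= and out= arguments are optional and default as described under Engines below.
```
dynamo-run [in=text|http|dyn://...] [out=<engine>] <hf-org>/<hf-repo>
```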
For gated models (such as meta-llama/Llama-3.2-3B-Instruct), you must set an HF_TOKEN environment variable.
The parameter can be the ID of a HuggingFace repository (which will be downloaded) or a folder containing safetensors, config.json, or similar (perhaps a locally checked out HuggingFace repository).
Run a model from a local file
To run a model from a local file:
- Download the model from Hugging Face
- Run the model from the local file
See the following sections for details.
Download model from Hugging Face
This model, available from Hugging Face, should be high quality and fast on almost any machine: https://huggingface.co/Qwen/Qwen3-0.6B
To run the model:
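A sketch: clone the repository locally (Hugging Face repositories can be fetched with git and git-lfs), then point dynamo-run at the folder. The target directory is illustrative.
```
git lfs install
git clone https://huggingface.co/Qwen/Qwen3-0.6B
dynamo-run ./Qwen3-0.6B
```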
Text interface
You can also pipe a prompt into dynamo-run:
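A sketch; the prompt and model are illustrative.
```
echo "What is the capital of France?" | dynamo-run Qwen/Qwen3-0.6B
```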
HTTP interface
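To start the OpenAI-compatible HTTP server, a minimal sketch; the model and engine choice are illustrative, and in=http is described further below.
```
dynamo-run in=http out=mistralrs Qwen/Qwen3-0.6B
```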
You can also list models or send a request:
List the models
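A sketch, assuming the server listens on port 8080; adjust the port to match your setup.
```
curl localhost:8080/v1/models
```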
Send a request
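A sketch of an OpenAI-style chat completion request, again assuming port 8080; the model name must match what the server reports.
```
curl localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Qwen/Qwen3-0.6B",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "max_tokens": 64
      }'
```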
Distributed System
You can run the ingress side (HTTP server and pre-processing) on one machine, for example a CPU node, and the worker on a different machine (a GPU node).
You will need etcd and NATS with JetStream installed and accessible from both nodes. For development I run NATS like this: nats-server -js --trace --store_dir $(mktemp -d).
Node 1: OpenAI compliant HTTP server, optional pre-processing, worker discovery:
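A minimal sketch, assuming etcd and NATS are reachable at their default addresses; out=auto is described below.
```
dynamo-run in=http out=auto
```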
Node 2: Engine. Receives and returns requests over the network:
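A sketch; the dyn:// address and model path are placeholders, and the address format is explained under Network names below.
```
dynamo-run in=dyn://llama3-1-8b.backend.generate out=mistralrs ~/llm_models/Llama-3.1-8B-Instruct
```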
This uses etcd to auto-discover the model and NATS to talk to it. You can
run multiple instances on the same endpoint; it picks one based on the
--router-mode (round-robin by default if left unspecified).
Run dynamo-run --help for more options.
Network names
The in=dyn:// URLs have the format dyn://namespace.component.endpoint. For a quickstart just use any string, such as dyn://test; dynamo-run will fill in any missing parts for you. The pieces matter for a larger system.
- Namespace: A pipeline. Usually a model. e.g “llama_8b”. Just a name.
- Component: A load balanced service needed to run that pipeline. “backend”, “prefill”, “decode”, “preprocessor”, “draft”, etc. This typically has some configuration (which model to use, for example).
- Endpoint: Like a URL. “generate”, “load_metrics”.
- Instance: A process. Unique. Dynamo assigns each one a unique instance_id. The thing that is running is always an instance. Namespace/component/endpoint can refer to multiple instances.
If you run two models, that is two pipelines. An exception is speculative decoding, where the draft model is part of the larger model's pipeline.
If you run two instances of the same model (“data parallel”) they are the same namespace+component+endpoint but different instances. The router will spread traffic over all the instances of a namespace+component+endpoint. If you have four prefill workers in a pipeline, they all have the same namespace+component+endpoint and are automatically assigned unique instance_ids.
Example 1: Data parallel load balanced, one model one pipeline two instances.
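A sketch: two instances registered at the same namespace.component.endpoint; the address and model paths are placeholders.
```
# Instance 1
dynamo-run in=dyn://llama3-1-8b.backend.generate out=mistralrs ~/llm_models/Llama-3.1-8B-Instruct
# Instance 2: same address, so it joins the same pool and traffic is spread across both
dynamo-run in=dyn://llama3-1-8b.backend.generate out=mistralrs ~/llm_models/Llama-3.1-8B-Instruct
```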
Example 2: Two models, two pipelines.
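A sketch: two different namespaces, hence two independent pipelines; the names and paths are placeholders.
```
dynamo-run in=dyn://llama3-1-8b.backend.generate out=mistralrs ~/llm_models/Llama-3.1-8B-Instruct
dynamo-run in=dyn://qwen3-0-6b.backend.generate out=mistralrs ~/llm_models/Qwen3-0.6B
```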
Example 3: Different endpoints.
The KV metrics publisher in VLLM adds a load_metrics endpoint to the current component. If the llama3-1-8b.backend component above is using patched vllm it will also expose llama3-1-8b.backend.load_metrics.
Example 4: Multiple components in a pipeline.
In the P/D disaggregated setup you would have deepseek-distill-llama8b.prefill.generate (possibly multiple instances of this) and deepseek-distill-llama8b.decode.generate.
For the output, it is always out=auto. This tells Dynamo to auto-discover the instances, group them by model, and load balance appropriately (depending on the --router-mode flag).
KV-aware routing
The only difference from the distributed system above is --router-mode kv. vLLM announces when a KV block is created or removed. The Dynamo router finds the worker with the best match for those KV blocks and directs traffic to that node.
For performance testing, compare a typical workload with --router-mode random|round-robin to see whether it benefits from KV-aware routing.
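A minimal sketch of the ingress side with KV-aware routing enabled; everything else is as in the distributed example above.
```
dynamo-run in=http out=auto --router-mode kv
```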
The KV-aware routing arguments:
- --kv-overlap-score-weight: Sets the amount of weighting on overlaps with prefix caches, which directly contributes to the prefill cost. A large weight is expected to yield a better TTFT (at the expense of worse ITL). When set to 0, prefix caches are not considered at all (falling back to pure load balancing behavior on the active blocks).
- --router-temperature: Sets the temperature when randomly selecting workers to route to via softmax sampling on the router cost logits. Setting it to 0 recovers the deterministic behavior where the min logit is picked.
- --use-kv-events: Sets whether to listen to KV events for maintaining the global view of cached blocks. If true, the router uses KV events to track block creation and deletion from workers. If false, the router predicts cache state based on routing decisions with TTL-based expiration (default 120s) and pruning. Set this to false if your backend engine does not emit KV events.
Request Migration
In a Distributed System, you can enable request migration to handle worker failures gracefully. Use the --migration-limit flag to specify how many times a request can be migrated to another worker:
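A sketch, assuming the flag is passed on the worker side; the address, engine, and model path are placeholders.
```
dynamo-run in=dyn://llama3-1-8b.backend.generate out=mistralrs ~/llm_models/Llama-3.1-8B-Instruct --migration-limit 3
```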
This allows a request to be migrated up to 3 times before failing. See the Request Migration Architecture documentation for details on how this works.
Request Cancellation
When using the HTTP interface (in=http), if the HTTP request connection is dropped by the client, Dynamo automatically cancels the downstream request to the worker. This ensures that computational resources are not wasted on generating responses that are no longer needed.
For detailed information about how request cancellation works across the system, see the Request Cancellation Architecture documentation.
Development
dynamo-run is also an example of what can be built in Rust with the dynamo-llm and dynamo-runtime crates. The following guide shows how to build from source with all the features.
Step 1: Install libraries
Ubuntu:
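A plausible dependency set for building a Rust project with OpenSSL, protobuf, and bindgen requirements; treat the exact package list as an assumption and check the repository's build instructions.
```
# Assumed package list; verify against the project's build docs
sudo apt install -y build-essential pkg-config libssl-dev libclang-dev protobuf-compiler cmake
```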
macOS:
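A sketch, assuming the Xcode command line tools provide the compiler and Metal toolchain.
```
# Install the Xcode command line tools
xcode-select --install
# Check that the Metal compiler is available (see the note below about the expected error)
xcrun -sdk macosx metal
```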
If Metal is accessible, you should see an error like metal: error: no input files, which confirms it is installed correctly.
Step 2: Install Rust
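The standard rustup installer:
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```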
Step 3: Build
- Linux with GPU and CUDA (tested on Ubuntu)
- macOS with Metal
- CPU only
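Hedged sketches for the three variants above, run from the crate's directory; the cuda and metal feature names are assumptions, so check the project's Cargo.toml for the authoritative feature list.
```
# Linux with GPU and CUDA (assumed feature name)
cargo build --features cuda
# macOS with Metal (assumed feature name)
cargo build --features metal
# CPU only
cargo build
```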
Optionally you can run cargo build from any location with arguments:
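A sketch using cargo's --manifest-path flag; the path is a placeholder for wherever the dynamo-run Cargo.toml lives in your checkout.
```
cargo build --manifest-path path/to/dynamo-run/Cargo.toml --features cuda
```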
The binary is called dynamo-run and is in target/debug.
Build with --release for a smaller binary and better performance, at the cost of longer build times. The binary will be in target/release.
Engines
The input defaults to in=text. The output defaults to the out=mistralrs engine, unless that is disabled with --no-default-features, in which case an engine that echoes back your input is used.
mistralrs
mistral.rs is a pure Rust engine that is fast to run and fast to load, and runs well on CPU as well as GPU. For those reasons it is the default engine.
Running dynamo-run Qwen/Qwen3-0.6B is equivalent to dynamo-run in=text out=mistralrs Qwen/Qwen3-0.6B.
If you have multiple GPUs, mistral.rs does automatic tensor parallelism. You do not need to pass any extra flags to dynamo-run to enable it.
Mocker engine
The mocker engine is a mock vLLM implementation designed for testing and development purposes. It simulates realistic token generation timing without requiring actual model inference, making it useful for:
- Testing distributed system components without GPU resources
- Benchmarking infrastructure and networking overhead
- Developing and debugging Dynamo components
- Load testing and performance analysis
Basic usage:
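A sketch; the dyn:// address is arbitrary and the model path is a placeholder that only needs to provide a tokenizer.
```
dynamo-run in=dyn://test.mocker.generate out=mocker --model-path ~/llm_models/Qwen3-0.6B
```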
The --model-path is required but can point to any valid model path; the mocker doesn't actually load the model weights (though the pre-processor needs the tokenizer). The arguments block_size, num_gpu_blocks, max_num_seqs, max_num_batched_tokens, enable_prefix_caching, and enable_chunked_prefill are common arguments shared with the real vLLM engine.
The following arguments are mocker-specific:
- speedup_ratio: Speed multiplier for token generation (default: 1.0). Higher values make the simulated engine run faster.
- dp_size: Number of data parallel workers to simulate (default: 1).
- watermark: KV cache watermark threshold as a fraction (default: 0.01). This argument also exists for the real vLLM engine but cannot be passed as an engine arg.
echo
The echo engine echoes the prompt back as the response.
The echo engine uses a configurable delay between tokens to simulate generation speed. You can adjust this using the DYN_TOKEN_ECHO_DELAY_MS environment variable:
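A sketch, assuming the engine is selected on the command line with out=echo; the delay value and model are illustrative.
```
# Emit a token roughly every 1 ms (about 1000 tokens per second)
export DYN_TOKEN_ECHO_DELAY_MS=1
dynamo-run out=echo Qwen/Qwen3-0.6B
```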
The default delay is 10ms, which produces approximately 100 tokens per second.
Other engines, multi-node, production
The vllm, sglang, and trtllm production-grade engines are available in examples/backends. They run as Python components, using the Rust bindings. See the main README.
dynamo-run is an exploration, development, and prototyping tool, as well as an example of using the Rust API. Multi-node and production setups should use the main engine components.
Batch mode
dynamo-run can take a jsonl file full of prompts and evaluate them all:
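A sketch, assuming batch input is selected with an in=batch:<file> form of the input argument; the file name and model are placeholders.
```
dynamo-run in=batch:prompts.jsonl out=mistralrs Qwen/Qwen3-0.6B
```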
The input file should look like this:
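A hedged sketch of the input file, one JSON object per line; the "text" field name is an assumption.
```
{"text": "What is the capital of France?"}
{"text": "What is the capital of Spain?"}
```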
Each one is passed as a prompt to the model. The output is written back to the same folder in output.jsonl. At the end of the run some statistics are printed.
The output looks like this:
Writing your own engine in Python
The dynamo Python library allows you to build your own engine and attach it to Dynamo. All of the main backend components in examples/backends/ work like this.
The Python file must do three things:
- Decorate a function to get the runtime
- Register on the network
- Attach a request handler
The model_path can be:
- A HuggingFace repo ID, optionally prefixed with hf://. It is downloaded and cached locally.
- The path to a checkout of a HuggingFace repo: any folder containing safetensor files as well as config.json, tokenizer.json and tokenizer_config.json.
The model_input can be:
- ModelInput.Tokens. Your engine expects pre-processed input (token IDs). Dynamo handles tokenization and pre-processing.
- ModelInput.Text. Your engine expects raw text input and handles its own tokenization and pre-processing.
The model_type can be:
- ModelType.Chat. Your generate method receives a request and must return a response dict of type OpenAI Chat Completion.
- ModelType.Completions. Your generate method receives a request and must return a response dict of the older Completions format.
register_llm can also take the following kwargs:
- model_name: The name to call the model. The model name in incoming HTTP requests must match this. Defaults to the Hugging Face repo name, or the folder name.
- context_length: Max model length in tokens. Defaults to the model's set max. Only set this if you need to reduce KV cache allocation to fit into VRAM.
- kv_cache_block_size: Size of a KV block for the engine, in tokens. Defaults to 16.
- user_data: Optional dictionary containing custom metadata for worker behavior (e.g., LoRA configuration). Defaults to None.
Here are some example engines:
More fully-featured Python engines are in examples/backends.
Debugging
dynamo-run and dynamo-runtime support tokio-console. Build with the feature to enable:
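A sketch; the feature name is an assumption, so check the crate's Cargo.toml.
```
cargo build --features tokio-console
```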
The listener uses the default tokio console port, and all interfaces (0.0.0.0).
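Once the binary is running, you can attach the console; tokio-console connects to its default port (6669) on localhost when invoked with no arguments.
```
tokio-console
```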