
Encode-Prefill-Decode (EPD) Flow with NIXL


For high-performance multimodal inference with large embeddings, Dynamo supports a specialized Encode-Prefill-Decode (EPD) flow using NIXL (RDMA) for zero-copy tensor transfer.

Use the Latest Release

We recommend using the latest stable release of Dynamo to avoid breaking changes:


You can find the latest release on the GitHub releases page and check out the corresponding tag with:

$git checkout $(git describe --tags $(git rev-list --tags --max-count=1))

Multimodal Aggregated Serving

Components

Workflow

The MultimodalEncodeWorker is responsible for encoding the image and passing the embeddings to the MultimodalWorker via a combination of NATS and RDMA. The work-complete event is sent via NATS, while the embeddings tensor is transferred via RDMA through the NIXL interface. The MultimodalWorker then prefills and decodes the prompt, just as in the LLM aggregated serving example. By separating the encode stage from the prefill and decode stages, the deployment becomes more flexible: the MultimodalEncodeWorker can be scaled independently of the prefill and decode workers if needed.
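To make the split between the control path and the data path concrete, here is a minimal illustrative sketch. It is not Dynamo code: the queue stands in for the NATS event channel (which carries only a small notification) and the dictionary stands in for RDMA-registered memory (which the real NIXL transfer reads without copying the tensor over the message bus). All names are hypothetical.

```python
# Illustrative sketch only - models the EPD control/data split with
# in-process stand-ins. None of these names are Dynamo or NIXL APIs.
import queue

# Stand-in for RDMA-registered memory: the encode worker writes the
# embeddings once; the consumer reads them in place.
rdma_region: dict[str, list[float]] = {}

# Stand-in for the NATS event channel: carries only a small request
# descriptor, never the embeddings tensor itself.
events: "queue.Queue[str]" = queue.Queue()

def encode_worker(request_id: str, image_pixels: list[float]) -> None:
    embeddings = [p * 0.5 for p in image_pixels]  # pretend vision encoder
    rdma_region[request_id] = embeddings          # "RDMA write" (data path)
    events.put(request_id)                        # "work complete" (NATS path)

def multimodal_worker() -> list[float]:
    request_id = events.get()            # wait for the completion event
    embeddings = rdma_region[request_id] # read in place; no copy over the bus
    return embeddings                    # prefill and decode would start here

encode_worker("req-1", [1.0, 2.0, 3.0])
print(multimodal_worker())
```

The point of the two channels is sizing: the event is a few bytes and cheap to route through NATS, while the embeddings for a large image can be many megabytes and are better moved once, directly between GPUs or hosts, over RDMA.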

This figure illustrates the workflow:

$cd $DYNAMO_HOME/examples/backends/sglang
$./launch/multimodal_agg.sh

Client

In another terminal:

$curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
> "model": "Qwen/Qwen2.5-VL-7B-Instruct",
> "messages": [
> {
> "role": "user",
> "content": [
> {
> "type": "text",
> "text": "Describe the image."
> },
> {
> "type": "image_url",
> "image_url": {
> "url": "http://images.cocodataset.org/test2017/000000155781.jpg"
> }
> }
> ]
> }
> ],
> "max_tokens": 50,
> "stream": false
> }' | jq

You should see a response similar to this:

{
  "id": "chatcmpl-2546f44756884a14916ce13ebaa09da8",
  "choices": [
    {
      "index": 0,
      "message": {
        "content": "This image shows a public transit bus on a dimly lit, street-level track in what appears to be a quiet urban neighborhood or suburban area. The bus displays \"OUT OF SERVICE\" in red on its illuminated sign. It is positioned",
        "role": "assistant",
        "reasoning_content": null
      },
      "finish_reason": "length"
    }
  ],
  "created": 1758824222,
  "model": "Qwen/Qwen2.5-VL-7B-Instruct",
  "object": "chat.completion",
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 40,
    "total_tokens": 40
  }
}
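The same request can also be issued from a script. The sketch below builds the identical payload with only the Python standard library; the endpoint and model name follow the curl example above, while `build_chat_request` and `send` are illustrative helper names, not part of Dynamo. The POST itself requires a running deployment, so it is left commented out.

```python
# Hypothetical client sketch for the OpenAI-compatible endpoint above.
import json
import urllib.request

def build_chat_request(prompt: str, image_url: str, max_tokens: int = 50) -> dict:
    """Build a chat completion payload with one text part and one
    image_url part, matching the curl example."""
    return {
        "model": "Qwen/Qwen2.5-VL-7B-Instruct",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": max_tokens,
        "stream": False,
    }

def send(payload: dict,
         endpoint: str = "http://localhost:8000/v1/chat/completions") -> dict:
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_chat_request(
    "Describe the image.",
    "http://images.cocodataset.org/test2017/000000155781.jpg",
)
# response = send(payload)  # requires a running deployment
# print(response["choices"][0]["message"]["content"])
```

With a deployment running, uncommenting the last two lines should print the generated description, i.e. the `choices[0].message.content` field of the response shown above.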

Multimodal Disaggregated Serving

Components

Workflow

For the Qwen2.5-VL model, embeddings are required only during the prefill stage. The image embeddings are therefore transferred using a NIXL descriptor from the encode worker to the MultimodalWorker and then passed on to the prefill worker for processing. The prefill worker performs the prefill step and forwards the resulting KV cache back to the MultimodalWorker for decoding. For more details on the roles of the prefill and decode workers, refer to the LLM disaggregated serving example.
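The key property of this flow is that the embeddings are consumed exactly once, at prefill, and only the (much longer-lived) KV cache travels on to decoding. A minimal illustrative sketch, with toy stand-ins for the real attention state (not Dynamo or SGLang code):

```python
# Illustrative sketch only: embeddings stop at prefill; the KV cache
# is the sole hand-off to the decode stage.
def prefill_worker(prompt_tokens: list[int], image_embeddings: list[float]) -> dict:
    """Consume the image embeddings once and emit the KV cache."""
    kv_cache = {
        "context_len": len(prompt_tokens) + len(image_embeddings),
        "state": sum(image_embeddings),  # pretend attention state
    }
    return kv_cache  # embeddings are dropped here; only the cache moves on

def decode_worker(kv_cache: dict, max_tokens: int) -> list[int]:
    """Generate tokens from the KV cache alone - no embeddings needed."""
    return [kv_cache["context_len"] + i for i in range(max_tokens)]

kv = prefill_worker([101, 102], [0.1, 0.2, 0.3])
tokens = decode_worker(kv, 3)  # the decode stage never sees the embeddings
```

Because decode never touches the embeddings, the encode worker can release them as soon as prefill completes, and the decode workers can be scaled without any NIXL transfer of image data.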

This figure illustrates the workflow:

$cd $DYNAMO_HOME/examples/backends/sglang
$./launch/multimodal_disagg.sh

Client

In another terminal:

$curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
> "model": "Qwen/Qwen2.5-VL-7B-Instruct",
> "messages": [
> {
> "role": "user",
> "content": [
> {
> "type": "text",
> "text": "Describe the image."
> },
> {
> "type": "image_url",
> "image_url": {
> "url": "http://images.cocodataset.org/test2017/000000155781.jpg"
> }
> }
> ]
> }
> ],
> "max_tokens": 50,
> "stream": false
> }' | jq

You should see a response similar to this:

{
  "id": "chatcmpl-2546f44756884a14916ce13ebaa09da8",
  "choices": [
    {
      "index": 0,
      "message": {
        "content": "This image shows a public transit bus on a dimly lit, street-level track in what appears to be a quiet urban neighborhood or suburban area. The bus displays \"OUT OF SERVICE\" in red on its illuminated sign. It is positioned",
        "role": "assistant",
        "reasoning_content": null
      },
      "finish_reason": "length"
    }
  ],
  "created": 1758824222,
  "model": "Qwen/Qwen2.5-VL-7B-Instruct",
  "object": "chat.completion",
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 40,
    "total_tokens": 40
  }
}