# Dynamo Support Matrix

This document provides the support matrix for Dynamo, including hardware, software and build instructions.

**See also:** [Release Artifacts](/dynamo/v-0-9-0/getting-started/release-artifacts) for container images, wheels, Helm charts, and crates | [Feature Matrix](/dynamo/v-0-9-0/getting-started/feature-matrix) for backend feature support

## Backend Dependencies

The following table shows the backend framework versions included with each Dynamo release:

| **Dynamo** | **vLLM** | **SGLang** | **TensorRT-LLM** | **NIXL** |
| :--- | :--- | :--- | :--- | :--- |
| **v0.9.0** | `0.14.1` | `0.5.8` | `1.3.0rc1` | `0.9.0` |
| **v0.8.1.post3**\* | `0.12.0` | `0.5.6.post2` | `1.2.0rc6.post3` | `0.8.0` |
| **v0.8.1.post2**\* | `0.12.0` | `0.5.6.post2` | `1.2.0rc6.post2` | `0.8.0` |
| **v0.8.1.post1**\* | `0.12.0` | `0.5.6.post2` | `1.2.0rc6.post1` | `0.8.0` |
| **v0.8.1** | `0.12.0` | `0.5.6.post2` | `1.2.0rc6.post1` | `0.8.0` |
| **v0.8.0** | `0.12.0` | `0.5.6.post2` | `1.2.0rc6.post1` | `0.8.0` |
| **v0.7.1** | `0.11.0` | `0.5.4.post3` | `1.2.0rc3` | `0.8.0` |
| **v0.7.0.post1** | `0.11.0` | `0.5.4.post3` | `1.2.0rc3` | `0.8.0` |
| **v0.7.0** | `0.11.0` | `0.5.4.post3` | `1.2.0rc2` | `0.8.0` |
| **v0.6.1.post1** | `0.11.0` | `0.5.3.post2` | `1.1.0rc5` | `0.6.0` |
| **v0.6.1** | `0.11.0` | `0.5.3.post2` | `1.1.0rc5` | `0.6.0` |
| **v0.6.0** | `0.11.0` | `0.5.3.post2` | `1.1.0rc5` | `0.6.0` |

\* Limited experimental releases -- only the TRT-LLM container and Dynamo PyPI wheels were published.

### Version Labels

- **v0.9.0** is the current release.

### Version Compatibility

- Backend versions listed are the only versions tested and supported for each release.
- TensorRT-LLM does not support Python 3.11; installation of the `ai-dynamo[trtllm]` wheel will fail on Python 3.11.

### CUDA Versions by Backend

| **Dynamo** | **vLLM** | **SGLang** | **TensorRT-LLM** | **Notes** |
| :--- | :--- | :--- | :--- | :--- |
| **v0.9.0** | `12.9`, `13.0` | `12.9`, `13.0` | `13.0` | Experimental vLLM/SGLang CUDA 13 support |
| **v0.8.1** | `12.9`, `13.0` | `12.9`, `13.0` | `13.0` | Experimental vLLM/SGLang CUDA 13 support |
| **v0.8.0** | `12.9`, `13.0` | `12.9`, `13.0` | `13.0` | Experimental vLLM/SGLang CUDA 13 support |
| **v0.7.1** | `12.9` | `12.8` | `13.0` | |
| **v0.7.0** | `12.8` | `12.9` | `13.0` | TensorRT-LLM CUDA 13 support - CUDA 12.9 deprecated |
| **v0.6.1** | `12.8` | `12.9` | `12.9` | |
| **v0.6.0** | `12.8` | `12.8` | `12.9` | |

Patch versions (e.g., v0.8.1.post1, v0.7.0.post1) have the same CUDA support as their base version.

For detailed artifact versions and NGC links (including container images, Python wheels, Helm charts, and Rust crates), see the [Release Artifacts](/dynamo/v-0-9-0/getting-started/release-artifacts) page.

## Hardware Compatibility

| **CPU Architecture** | **Status** |
| :------------------- | :--------- |
| **x86_64** | Supported |
| **ARM64** | Supported |

Dynamo provides multi-arch container images supporting both AMD64 (x86_64) and ARM64 architectures. See [Release Artifacts](/dynamo/v-0-9-0/getting-started/release-artifacts) for available images.
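Because the images are multi-arch, the same tag works on x86_64 and ARM64 hosts and the container runtime selects the matching variant. A minimal sketch, assuming the vLLM runtime image with an illustrative `nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.9.0` path and tag; check [Release Artifacts](/dynamo/v-0-9-0/getting-started/release-artifacts) for the exact image names and tags:

```bash
# Pull the multi-arch image; Docker resolves the manifest entry for the host architecture
# (image path and tag are illustrative assumptions)
docker pull nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.9.0

# Optionally request a specific architecture, e.g., when preparing images for ARM64 hosts
docker pull --platform linux/arm64 nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.9.0
```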
### GPU Compatibility

The following GPU architectures are supported:

| **GPU Architecture** | **Status** |
| :----------------------------------- | :--------- |
| **NVIDIA Blackwell Architecture** | Supported |
| **NVIDIA Hopper Architecture** | Supported |
| **NVIDIA Ada Lovelace Architecture** | Supported |
| **NVIDIA Ampere Architecture** | Supported |

## Platform Architecture Compatibility

**Dynamo** is compatible with the following platforms:

| **Operating System** | **Version** | **Architecture** | **Status** |
| :------------------- | :---------- | :--------------- | :----------- |
| **Ubuntu** | 22.04 | x86_64 | Supported |
| **Ubuntu** | 24.04 | x86_64 | Supported |
| **Ubuntu** | 24.04 | ARM64 | Supported |
| **CentOS Stream** | 9 | x86_64 | Experimental |

Wheels are built in a manylinux_2_28-compatible environment and validated on CentOS Stream 9 and Ubuntu (22.04, 24.04). Compatibility with other Linux distributions is expected but not officially verified.

> [!Caution]
> KV Block Manager is supported only with Python 3.12. Python 3.12 support is currently limited to Ubuntu 24.04.

## Software Compatibility

### CUDA and Driver Requirements

Dynamo container images include the CUDA toolkit libraries. The host machine must have a compatible NVIDIA GPU driver installed.

| Dynamo Version | Backend | CUDA Toolkit | Min Driver (Linux) | Min Driver (Windows) | Notes |
| :--- | :--- | :--- | :--- | :--- | :--- |
| **0.9.0** | **vLLM** | 12.9 | 575.xx+ | 576.xx+ | |
| | | 13.0 | 580.xx+ | 581.xx+ | Experimental |
| | **SGLang** | 12.9 | 575.xx+ | 576.xx+ | |
| | | 13.0 | 580.xx+ | 581.xx+ | Experimental |
| | **TensorRT-LLM** | 13.0 | 580.xx+ | 581.xx+ | |
| **0.8.1** | **vLLM** | 12.9 | 575.xx+ | 576.xx+ | |
| | | 13.0 | 580.xx+ | 581.xx+ | Experimental |
| | **SGLang** | 12.9 | 575.xx+ | 576.xx+ | |
| | | 13.0 | 580.xx+ | 581.xx+ | Experimental |
| | **TensorRT-LLM** | 13.0 | 580.xx+ | 581.xx+ | |
| **0.8.0** | **vLLM** | 12.9 | 575.xx+ | 576.xx+ | |
| | | 13.0 | 580.xx+ | 581.xx+ | Experimental |
| | **SGLang** | 12.9 | 575.xx+ | 576.xx+ | |
| | | 13.0 | 580.xx+ | 581.xx+ | Experimental |
| | **TensorRT-LLM** | 13.0 | 580.xx+ | 581.xx+ | |
| **0.7.1** | **vLLM** | 12.9 | 575.xx+ | 576.xx+ | |
| | **SGLang** | 12.8 | 570.xx+ | 571.xx+ | |
| | **TensorRT-LLM** | 13.0 | 580.xx+ | 581.xx+ | |
| **0.7.0** | **vLLM** | 12.8 | 570.xx+ | 571.xx+ | |
| | **SGLang** | 12.9 | 575.xx+ | 576.xx+ | |
| | **TensorRT-LLM** | 13.0 | 580.xx+ | 581.xx+ | |

Experimental CUDA 13 images are not published for all versions. Check [Release Artifacts](/dynamo/v-0-9-0/getting-started/release-artifacts) for availability.

#### CUDA Compatibility Resources

For detailed information on CUDA driver compatibility, forward compatibility, and troubleshooting:

- [CUDA Compatibility Overview](https://docs.nvidia.com/deploy/cuda-compatibility/)
- [Why CUDA Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/why-cuda-compatibility.html)
- [Minor Version Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/minor-version-compatibility.html)
- [Forward Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/forward-compatibility.html)
- [FAQ](https://docs.nvidia.com/deploy/cuda-compatibility/frequently-asked-questions.html)

For extended driver compatibility beyond the minimum versions listed above, consider using `cuda-compat` packages on the host.
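A minimal sketch of checking the installed driver before deciding whether a compatibility package is needed; the `cuda-compat-13-0` package name is an assumption for a CUDA 13.0 stack on an Ubuntu host and may differ for your distribution and toolkit version:

```bash
# Report the installed NVIDIA driver version and compare it against the minimums in the table above
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# If the driver is older than the minimum for your target CUDA toolkit, a forward-compatibility
# package can bridge the gap on supported data-center GPUs (package name is an assumption;
# verify the exact name in NVIDIA's repository for your distribution and CUDA version)
sudo apt-get update && sudo apt-get install -y cuda-compat-13-0
```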
See [Forward Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/forward-compatibility.html) for details.

## Cloud Service Provider Compatibility

### AWS

| **Host Operating System** | **Version** | **Architecture** | **Status** |
| :------------------------ | :---------- | :--------------- | :--------- |
| **Amazon Linux** | 2023 | x86_64 | Supported |

> [!Caution]
> **AL2023 TensorRT-LLM Limitation:** There is a known issue with the TensorRT-LLM framework when running the AL2023 container locally with `docker run --network host ...` due to a [bug](https://github.com/mpi4py/mpi4py/discussions/491#discussioncomment-12660609) in mpi4py. To avoid this issue, replace the `--network host` flag with more precise networking configuration by mapping only the necessary ports (e.g., 4222 for NATS, 2379/2380 for etcd, 8000 for the frontend).

## Build Support

For version-specific artifact details, installation commands, and release history, see [Release Artifacts](/dynamo/v-0-9-0/getting-started/release-artifacts).

**Dynamo** currently provides build support in the following ways:

- **Wheels**: We distribute Python wheels for Dynamo and the KV Block Manager:
  - [ai-dynamo](https://pypi.org/project/ai-dynamo/)
  - [ai-dynamo-runtime](https://pypi.org/project/ai-dynamo-runtime/)
  - [kvbm](https://pypi.org/project/kvbm/) (standalone KV Block Manager implementation)

- **Dynamo Container Images**: We distribute multi-arch images (x86_64 and ARM64) on [NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/collections/ai-dynamo):
  - [Dynamo Frontend](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/containers/dynamo-frontend)
  - [SGLang Runtime](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/containers/sglang-runtime)
  - [SGLang Runtime (CUDA 13)](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/containers/sglang-runtime-cu13)
  - [TensorRT-LLM Runtime](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/containers/tensorrtllm-runtime)
  - [vLLM Runtime](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/containers/vllm-runtime)
  - [vLLM Runtime (CUDA 13)](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/containers/vllm-runtime-cu13)
  - [Kubernetes Operator](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/containers/kubernetes-operator)

- **Helm Charts**: [NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/collections/ai-dynamo) hosts the Helm charts supporting Kubernetes deployments of Dynamo:
  - [Dynamo CRDs](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/helm-charts/dynamo-crds)
  - [Dynamo Platform](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/helm-charts/dynamo-platform)
  - ~~[Dynamo Graph](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/helm-charts/dynamo-graph)~~ *(Deprecated in v0.9.0)*

- **Rust Crates**:
  - [dynamo-runtime](https://crates.io/crates/dynamo-runtime/)
  - [dynamo-llm](https://crates.io/crates/dynamo-llm/)
  - [dynamo-async-openai](https://crates.io/crates/dynamo-async-openai/)
  - [dynamo-parsers](https://crates.io/crates/dynamo-parsers/)
  - [dynamo-config](https://crates.io/crates/dynamo-config/)
  - [dynamo-memory](https://crates.io/crates/dynamo-memory/)
  - [dynamo-tokens](https://crates.io/crates/dynamo-tokens/)

Once you've confirmed that your platform and architecture are compatible, you can install **Dynamo** by following the [Local Quick Start](https://github.com/ai-dynamo/dynamo/blob/main/README.md#local-quick-start) in the README.
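For a quick wheel-based install, the sketch below pins the Dynamo release and pulls a matching backend via an extra. The `[vllm]` extra and the exact version pin are illustrative assumptions; the README's Local Quick Start is the authoritative set of commands:

```bash
# Create an isolated environment (Python 3.12 recommended; TensorRT-LLM does not support Python 3.11)
python3 -m venv dynamo-venv && source dynamo-venv/bin/activate

# Install the Dynamo wheel with a backend extra; the extra name and version pin are assumptions,
# so check PyPI and the Release Artifacts page for the versions published for your target release
pip install "ai-dynamo[vllm]==0.9.0"
```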