Frontend
The Dynamo Frontend is the API gateway for serving LLM inference requests. It provides OpenAI-compatible HTTP endpoints and KServe gRPC endpoints, handling request preprocessing, routing, and response formatting.
Quick Start
Prerequisites
- Dynamo platform installed
- etcd and nats-server -js running
- At least one backend worker registered
HTTP Frontend
Launching the frontend starts an OpenAI-compatible HTTP server with integrated pre- and post-processing and routing. Backends are auto-discovered when they call register_model.
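A minimal launch-and-query sketch. The module name, port, and model are assumptions to adapt to your deployment: it presumes the frontend is started as the Python module dynamo.frontend, listens on port 8000, and that a backend worker has registered Qwen/Qwen2.5-0.5B-Instruct.

```bash
# Start the HTTP frontend. Module name and port flag are assumptions;
# check `python -m dynamo.frontend --help` for the exact invocation.
python -m dynamo.frontend --http-port 8000

# Query the OpenAI-compatible chat completions endpoint. The model name
# must match one registered by a backend worker (assumed here).
curl localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Qwen/Qwen2.5-0.5B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64
      }'
```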
The frontend performs the pre- and post-processing. To do this it needs access to the model's configuration files: config.json, tokenizer.json, tokenizer_config.json, etc. It does not need the weights.
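For illustration, a directory holding everything the frontend needs; the path is hypothetical and the exact file set varies by model:

```bash
# Illustrative only: the frontend reads the configuration files, not the
# weight shards (the .safetensors files can live on the backend alone).
ls /models/Qwen2.5-0.5B-Instruct/
# config.json  generation_config.json  tokenizer.json  tokenizer_config.json
```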
The frontend will download the files it needs from Hugging Face; no setup is required. However, we recommend setting up modelexpress-server and a shared folder such as a Kubernetes PVC, which ensures the model is downloaded only once across the whole cluster.
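One way to share downloads is to point the Hugging Face cache at the shared volume. This sketch assumes the PVC is mounted at /mnt/models; the mount path is an assumption, and modelexpress-server has its own setup documentation.

```bash
# Hedged sketch: reuse one Hugging Face cache across the cluster by
# setting HF_HOME (the standard Hugging Face cache variable) to a
# shared mount before starting the frontend.
export HF_HOME=/mnt/models/hf-cache
python -m dynamo.frontend
```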
If the model is not available on Hugging Face, such as a private or customized model, you will need to make the model files available locally at the same file path as on the backend: the path given to the backend's --model-path must also exist on the frontend and contain at least the configuration (JSON) files.
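For example, if the backend serves from /models/my-private-model (a hypothetical path, with the vLLM worker module name also an assumption), the frontend host needs that same path populated with the JSON files:

```bash
# Backend host: serves the full model, weights included.
python -m dynamo.vllm --model-path /models/my-private-model

# Frontend host: the same path must exist and hold the config files;
# the weight shards are not required here.
ls /models/my-private-model
# config.json  tokenizer.json  tokenizer_config.json
```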
KServe gRPC Frontend
See the Frontend Guide for KServe-specific configuration and message formats.
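As a hedged sketch, the gRPC frontend is enabled with a flag on the same entry point; the flag name below is an assumption, so confirm it against the Frontend Guide before relying on it.

```bash
# Assumed flag for enabling the KServe gRPC server; verify the exact
# option name and port with `python -m dynamo.frontend --help`.
python -m dynamo.frontend --kserve-grpc-server
```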
Configuration
See the Frontend Guide for full configuration options.
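To see which options your installed version actually supports:

```bash
# Print all supported flags for the installed frontend build.
python -m dynamo.frontend --help
```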