Speculative Decoding

Speculative decoding is an optimization technique that uses a smaller “draft” model to predict multiple tokens, which are then verified by the main model in parallel. This can significantly reduce latency for autoregressive generation.

Backend Support

| Backend | Status | Notes |
|---|---|---|
| vLLM | ✅ | Eagle3 draft model support |
| SGLang | 🚧 | Not yet documented |
| TensorRT-LLM | 🚧 | Not yet documented |

Overview

Speculative decoding works by:

  1. Draft phase: A smaller, faster model generates candidate tokens
  2. Verify phase: The main model verifies these candidates in a single forward pass
  3. Accept/reject: Tokens are accepted if they match what the main model would have generated

This approach trades additional compute for lower latency: the main model can emit several accepted tokens per forward pass instead of one.
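To make the three phases concrete, here is a minimal greedy sketch in plain Python. The `draft_next_token` and `target_next_token` callables are hypothetical stand-ins for the draft and main models; a real engine verifies all candidates in one batched forward pass and, when sampling, uses rejection sampling rather than exact greedy matching.

```python
from typing import Callable, List

def speculative_decode_step(
    draft_next_token: Callable[[List[int]], int],   # hypothetical: greedy next token from the draft model
    target_next_token: Callable[[List[int]], int],  # hypothetical: greedy next token from the main model
    context: List[int],
    num_draft_tokens: int = 5,
) -> List[int]:
    """One speculative-decoding step: draft, verify, accept/reject."""
    # Draft phase: the small model proposes candidate tokens autoregressively.
    draft: List[int] = []
    for _ in range(num_draft_tokens):
        draft.append(draft_next_token(context + draft))

    # Verify phase: check each candidate against what the main model would emit.
    # (Shown sequentially for clarity; real engines score all positions in one forward pass.)
    accepted: List[int] = []
    for candidate in draft:
        expected = target_next_token(context + accepted)
        if candidate == expected:
            accepted.append(candidate)   # accept: the draft guessed correctly
        else:
            accepted.append(expected)    # reject: keep the main model's token and stop
            break
    else:
        # Every candidate was accepted, so the main model contributes one extra "bonus" token.
        accepted.append(target_next_token(context + accepted))

    return accepted
```

Even in the worst case, where the first candidate is rejected, the step still yields one token from the main model, so output quality matches plain autoregressive decoding; the speedup comes from the steps where several candidates are accepted per verification pass.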

Quick Start (vLLM + Eagle3)

This guide walks through deploying Meta-Llama-3.1-8B-Instruct with Eagle3 speculative decoding on a single GPU with at least 16GB VRAM.

Prerequisites

  1. Start infrastructure services:
$ docker compose -f deploy/docker-compose.yml up -d
  2. Build and run the vLLM container:
$ ./container/build.sh --framework VLLM
$ ./container/run.sh -it --framework VLLM --mount-workspace
  3. Set up Hugging Face access (Meta-Llama-3.1-8B-Instruct is gated):
$ export HUGGING_FACE_HUB_TOKEN="your_token_here"
$ export HF_TOKEN=$HUGGING_FACE_HUB_TOKEN

Run Speculative Decoding

$ cd examples/backends/vllm
$ bash launch/agg_spec_decoding.sh
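For context, the launch script configures the vLLM engine with a draft model. As a rough illustration of what that involves, the sketch below uses vLLM's offline `LLM` API with a `speculative_config` dictionary; the `method` value, the draft-model repository name, and the token count are assumptions for illustration only, and `launch/agg_spec_decoding.sh` remains the source of truth for the actual deployment settings.

```python
# Illustrative sketch only: approximate shape of an Eagle3 speculative-decoding setup
# via vLLM's offline API. The draft-model repo, "method" value, and token count below
# are assumptions; the launch script may configure the engine differently.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    speculative_config={
        "method": "eagle3",                               # assumed Eagle3 method name
        "model": "yuhuili/EAGLE3-LLaMA3.1-Instruct-8B",   # hypothetical draft-model repo
        "num_speculative_tokens": 5,                      # candidates drafted per verification step
    },
)

outputs = llm.generate(
    ["Write a poem about why Sakura trees are beautiful."],
    SamplingParams(max_tokens=250),
)
print(outputs[0].outputs[0].text)
```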

Test the Deployment

$ curl http://localhost:8000/v1/chat/completions \
> -H "Content-Type: application/json" \
> -d '{
> "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
> "messages": [
> {"role": "user", "content": "Write a poem about why Sakura trees are beautiful."}
> ],
> "max_tokens": 250
> }'
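The endpoint is OpenAI-compatible, so a Python client works just as well as curl. This sketch assumes the `openai` package is installed and that the local endpoint does not check the API key, so a placeholder value is passed.

```python
# Query the local OpenAI-compatible endpoint started by the launch script.
# Assumes `pip install openai`; the api_key is a placeholder since the local
# endpoint is not expected to require authentication.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[
        {"role": "user", "content": "Write a poem about why Sakura trees are beautiful."}
    ],
    max_tokens=250,
)
print(response.choices[0].message.content)
```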
