Speculative Decoding
Speculative decoding is an optimization technique that uses a smaller “draft” model to predict multiple tokens, which are then verified by the main model in parallel. This can significantly reduce latency for autoregressive generation.
Backend Support
Overview
Speculative decoding works by:
- Draft phase: A smaller, faster model generates candidate tokens
- Verify phase: The main model verifies these candidates in a single forward pass
- Accept/reject: Tokens are accepted if they match what the main model would have generated
This approach trades additional compute for lower latency: several tokens can be emitted per forward pass of the main model instead of one.
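As a concrete illustration, here is a minimal, self-contained sketch of the draft/verify/accept-reject loop. It uses greedy token matching with toy stand-in callables rather than real models; production implementations (including vLLM's) use probabilistic rejection sampling so the output distribution matches the main model exactly, and they batch the verification into a single forward pass.

```python
# Toy sketch of the draft -> verify -> accept/reject loop (greedy acceptance).
# The "models" here are stand-in callables, not real LLMs.

from typing import Callable, List

def speculative_step(
    prefix: List[int],
    draft_next: Callable[[List[int]], int],   # cheap draft model: next token id
    target_next: Callable[[List[int]], int],  # expensive main model: next token id
    k: int = 4,                               # number of draft tokens per step
) -> List[int]:
    # Draft phase: the small model proposes k candidate tokens autoregressively.
    candidates = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        candidates.append(t)
        ctx.append(t)

    # Verify phase: the main model checks each candidate position.
    # (A real engine scores all positions in one batched forward pass; here we loop.)
    accepted = []
    ctx = list(prefix)
    for t in candidates:
        expected = target_next(ctx)
        if expected == t:
            accepted.append(t)         # accept: draft matched the main model
            ctx.append(t)
        else:
            accepted.append(expected)  # reject: fall back to the main model's own token
            break
    return accepted

# Example with trivial stand-in models that always agree:
if __name__ == "__main__":
    draft = lambda ctx: (ctx[-1] + 1) % 100
    target = lambda ctx: (ctx[-1] + 1) % 100
    print(speculative_step([1, 2, 3], draft, target))  # -> [4, 5, 6, 7]
```

When draft and main model agree on all k candidates, one main-model pass yields k tokens; on the first disagreement the main model's token is kept and drafting restarts from there.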
Quick Start (vLLM + Eagle3)
This guide walks through deploying Meta-Llama-3.1-8B-Instruct with Eagle3 speculative decoding on a single GPU with at least 16 GB of VRAM.
Prerequisites
- Start infrastructure services
- Build and run the vLLM container
- Set up Hugging Face access (Meta-Llama-3.1-8B-Instruct is gated)
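Hedged example commands for these three steps are sketched below. The exact commands are deployment-specific: the compose file path, image name, and ports here are assumptions rather than this project's actual tooling, while the Hugging Face step uses the standard token-based login.

```bash
# 1. Start infrastructure services (a docker compose file is assumed; path is hypothetical).
docker compose -f deploy/docker-compose.yml up -d

# 2. Build and run the vLLM container (image name and run flags are assumptions).
docker build -t vllm-spec-decode .
docker run --gpus all -it --rm -p 8000:8000 vllm-spec-decode

# 3. Set up Hugging Face access (request access on the model page first, then authenticate).
export HF_TOKEN=<your_token>   # or: huggingface-cli login
```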
Run Speculative Decoding
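A minimal sketch of launching the model with Eagle3 speculative decoding via vLLM's OpenAI-compatible server is shown below. The draft-model repository (`yuhuili/EAGLE3-LLaMA3.1-Instruct-8B`), the number of speculative tokens, and the JSON `--speculative-config` form are assumptions based on recent vLLM releases; older releases expose separate flags, and this project's own launch command may differ.

```bash
# Hedged example: serve Llama-3.1-8B-Instruct with an Eagle3 draft model via vLLM.
# Draft model repo and num_speculative_tokens are assumptions; adjust for your setup.
vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct \
  --speculative-config '{
    "method": "eagle3",
    "model": "yuhuili/EAGLE3-LLaMA3.1-Instruct-8B",
    "num_speculative_tokens": 4
  }' \
  --port 8000
```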
Test the Deployment
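Once the server is up, a standard OpenAI-compatible request exercises the endpoint. The port assumes vLLM's default of 8000; adjust it if your deployment fronts the engine differently.

```bash
# Hedged example: query the OpenAI-compatible chat completions endpoint.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Explain speculative decoding in one sentence."}],
    "max_tokens": 64
  }'
```

With speculative decoding enabled, responses should be functionally identical to the baseline deployment; the difference shows up as lower per-request latency rather than different output.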
Backend-Specific Guides
See Also
- vLLM Backend - Full vLLM deployment guide
- Disaggregated Serving - Alternative optimization approach
- Meta-Llama-3.1-8B-Instruct on Hugging Face