Running Meta-Llama-3.1-8B-Instruct with Speculative Decoding (Eagle3)
This guide walks through how to deploy Meta-Llama-3.1-8B-Instruct using aggregated speculative decoding with Eagle3 on a single node. At 8B parameters, the model fits on a single GPU; note that in BF16 the weights alone occupy roughly 16GB, so a GPU with more than 16GB of VRAM is recommended to leave headroom for the KV cache and the Eagle3 draft model.
Step 1: Set Up Your Docker Environment
First, start a Docker container using the vLLM backend. You can refer to the vLLM Quickstart Guide, or follow the full steps below.
1. Launch Docker Compose
2. Build the Container
3. Run the Container
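The three steps above might look like the following. The compose file location and the `build.sh`/`run.sh` helper script paths and flags are assumptions based on common repository layouts; check your checkout for the actual names.

```shell
# 1. Launch supporting services defined in the repository's docker-compose.yml
#    (file location is illustrative)
docker compose -f deploy/docker-compose.yml up -d

# 2. Build the container image for the vLLM backend
#    (the --framework flag is an assumption; check your build script's help)
./container/build.sh --framework vllm

# 3. Run the container interactively with GPU access
#    (script name and flags are illustrative)
./container/run.sh -it --gpus all
```

If your repository uses different script names, the intent is the same: bring up dependencies, build the backend image, then drop into a GPU-enabled shell inside the container.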
Step 2: Get Access to the Llama-3.1 Model
The Meta-Llama-3.1-8B-Instruct model is gated, so you’ll need to request access on Hugging Face. Go to the official Meta-Llama-3.1-8B-Instruct repository and fill out the access form. Approval usually takes around 5 minutes.
Once you have access, generate a Hugging Face access token with permission for gated repositories, then set it inside your container:
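Setting the token inside the container typically amounts to exporting the `HF_TOKEN` environment variable, which the Hugging Face libraries read automatically; logging in via the Hugging Face CLI works as well:

```shell
# Export your Hugging Face token so gated model downloads can authenticate
# (replace the placeholder with your actual token)
export HF_TOKEN=hf_your_token_here

# Alternatively, log in with the Hugging Face CLI
huggingface-cli login --token "$HF_TOKEN"
```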
Step 3: Run Aggregated Speculative Decoding
Now that your environment is ready, start the aggregated server with speculative decoding.
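As a sketch, launching speculative decoding with plain vLLM might look like the command below. The Eagle3 draft-model repository name and the `num_speculative_tokens` value are assumptions; consult your backend's documentation for the exact speculative-decoding options it supports.

```shell
# Serve the target model with an Eagle3 draft model for speculative decoding.
# Draft model name and num_speculative_tokens are illustrative values.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --speculative-config '{
    "method": "eagle3",
    "model": "yuhuili/EAGLE3-LLaMA3.1-Instruct-8B",
    "num_speculative_tokens": 3
  }'
```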
Once the weights finish downloading and serving begins, you’ll be ready to send inference requests to your model.
Step 4: Example Request
To verify your setup, try sending a simple prompt to your model:
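A minimal check can be made with `curl` against the OpenAI-compatible chat completions endpoint. Port 8000 is vLLM's default; adjust the host and port if your deployment maps them differently:

```shell
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [
      {"role": "user", "content": "Explain speculative decoding in one sentence."}
    ],
    "max_tokens": 64
  }'
```

A successful response is a JSON object whose `choices[0].message.content` field contains the model's reply.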