Logging

This guide demonstrates how to set up logging for Dynamo in Kubernetes using Grafana Loki and Grafana Alloy. It provides a simple reference configuration that can be followed in Kubernetes clusters, including Minikube and MicroK8s.

[!Note] This setup is intended for development and testing purposes. For production environments, please refer to the official documentation for high-availability configurations.

Components Overview

  • Grafana Loki: Fast and cost-effective Kubernetes-native log aggregation system.

  • Grafana Alloy: OpenTelemetry collector that replaces Promtail, gathering logs, metrics and traces from Kubernetes pods.

  • Grafana: Visualization platform for querying and exploring logs.

Prerequisites

1. Dynamo Kubernetes Platform

This guide assumes you have installed the Dynamo Kubernetes Platform. For more information, see Dynamo Kubernetes Platform.

2. Kube-prometheus

While this guide does not use Prometheus, it assumes Grafana is already installed via the kube-prometheus stack. For more information, see kube-prometheus.

3. Environment Variables

Kubernetes Setup Variables

The following environment variables are used throughout this guide:

  • MONITORING_NAMESPACE: The namespace where Loki is installed
  • DYN_NAMESPACE: The namespace where Dynamo Kubernetes Platform is installed

$export MONITORING_NAMESPACE=monitoring
$export DYN_NAMESPACE=dynamo-system

Dynamo Logging Variables

| Variable | Description | Example |
|----------|-------------|---------|
| DYN_LOGGING_JSONL | Enable JSONL logging format (required for Loki) | true |
| DYN_LOG | Log levels per target: <default_level>,<module_path>=<level>,<module_path>=<level> | DYN_LOG=info,dynamo_runtime::system_status_server=trace |
| DYN_LOG_USE_LOCAL_TZ | Use local timezone for timestamps | true |
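
For a quick check outside Kubernetes, these variables can be exported directly in the shell; a sketch, with the DYN_LOG module path taken from the example above:

$export DYN_LOGGING_JSONL=true
$export DYN_LOG=info,dynamo_runtime::system_status_server=trace
$export DYN_LOG_USE_LOCAL_TZ=true

In Kubernetes, the same variables are set as container environment variables on the DynamoGraphDeployment (see step 4).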

Installation Steps

1. Install Loki

First, we’ll install Loki in single binary mode, which is ideal for testing and development:

$# Add the Grafana Helm repository
$helm repo add grafana https://grafana.github.io/helm-charts
$helm repo update
$
$# Install Loki
$helm install --values deploy/observability/k8s/logging/values/loki-values.yaml loki grafana/loki -n $MONITORING_NAMESPACE

Our values file (loki-values.yaml) sets up Loki in a simple configuration suitable for testing and development, using a local MinIO instance for storage. The installation pods can be viewed with:

$kubectl get pods -n $MONITORING_NAMESPACE -l app=loki
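
Optionally, check that Loki's HTTP API is reachable through its gateway service before moving on; the loki-gateway service name and port below are the chart defaults, matching the URL the collector uses in the next step:

$# Port-forward the Loki gateway
$kubectl port-forward svc/loki-gateway 3100:80 -n $MONITORING_NAMESPACE
$
$# In another terminal: list known labels (an empty data list is expected before any logs arrive)
$curl -s http://localhost:3100/loki/api/v1/labels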

2. Install Grafana Alloy

Next, install the Grafana Alloy collector to gather logs from your Kubernetes cluster and forward them to Loki. Here we use the k8s-monitoring Helm chart provided by Grafana:

$# Generate a custom values file with the namespace information
$envsubst < deploy/observability/k8s/logging/values/alloy-values.yaml > alloy-custom-values.yaml
$
$# Install the collector
$helm install --values alloy-custom-values.yaml alloy grafana/k8s-monitoring -n $MONITORING_NAMESPACE

The values file (alloy-values.yaml) includes the following configurations for the collector:

  • Destination to forward logs to Loki
  • Namespace to collect logs from
  • Pod labels to be mapped to Loki labels
  • Collection method (kubernetesApi or tailing /var/log/containers/)
destinations:
  - name: loki
    type: loki
    url: http://loki-gateway.$MONITORING_NAMESPACE.svc.cluster.local/loki/api/v1/push
podLogs:
  enabled: true
  gatherMethod: kubernetesApi # collect logs from the kubernetes api, rather than /var/log/containers/; friendly for testing and development
  collector: alloy-logs
  labels:
    app_kubernetes_io_name: app.kubernetes.io/name
    nvidia_com_dynamo_component_type: nvidia.com/dynamo-component-type
    nvidia_com_dynamo_graph_deployment_name: nvidia.com/dynamo-graph-deployment-name
  labelsToKeep:
    - "app_kubernetes_io_name"
    - "container"
    - "instance"
    - "job"
    - "level"
    - "namespace"
    - "service_name"
    - "service_namespace"
    - "deployment_environment"
    - "deployment_environment_name"
    - "nvidia_com_dynamo_component_type" # extract this label from the dynamo graph deployment
    - "nvidia_com_dynamo_graph_deployment_name" # extract this label from the dynamo graph deployment
  namespaces:
    - $DYN_NAMESPACE
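
After the chart is installed, you can confirm that the log collector pods are running (exact pod names depend on the chart release):

$kubectl get pods -n $MONITORING_NAMESPACE | grep alloy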

3. Configure Grafana with the Loki datasource and Dynamo Logs dashboard

We will be viewing the logs associated with our DynamoGraphDeployment in Grafana. To do this, we need to configure Grafana with the Loki datasource and Dynamo Logs dashboard.

Since we are using Grafana with the Prometheus Operator, applying the following ConfigMaps is enough to achieve this configuration.

$# Configure Grafana with the Loki datasource
$envsubst < deploy/observability/k8s/logging/grafana/loki-datasource.yaml | kubectl apply -n $MONITORING_NAMESPACE -f -
$
$# Configure Grafana with the Dynamo Logs dashboard
$envsubst < deploy/observability/k8s/logging/grafana/logging-dashboard.yaml | kubectl apply -n $MONITORING_NAMESPACE -f -

[!Note] If using Grafana installed without the Prometheus Operator, you can manually import the Loki datasource and Dynamo Logs dashboard using the Grafana UI.
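
For reference, a datasource ConfigMap of this kind typically relies on the Grafana sidecar shipped with kube-prometheus, which loads any ConfigMap carrying the datasource label. The sketch below is an assumption of what loki-datasource.yaml roughly contains; the file in the repository is authoritative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-datasource
  labels:
    grafana_datasource: "1" # label watched by the Grafana datasource sidecar
data:
  loki-datasource.yaml: |
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        url: http://loki-gateway.$MONITORING_NAMESPACE.svc.cluster.local

The dashboard ConfigMap follows the same pattern, using the sidecar's dashboard label instead.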

4. Deploy a DynamoGraphDeployment with JSONL Logging

At this point, we should have everything in place to collect and view logs in our Grafana instance. All that is left is to deploy a DynamoGraphDeployment to collect logs from.

To enable structured logs in a DynamoGraphDeployment, set the DYN_LOGGING_JSONL environment variable to 1. This is already done in the agg_logging.yaml example for the SGLang backend, so we can deploy the DynamoGraphDeployment with:

$kubectl apply -n $DYN_NAMESPACE -f examples/backends/sglang/deploy/agg_logging.yaml
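
Before sending traffic, confirm that the deployment and its pods have come up (the resource name below assumes the DynamoGraphDeployment CRD installed by the platform):

$# Check the DynamoGraphDeployment and its pods
$kubectl get dynamographdeployments -n $DYN_NAMESPACE
$kubectl get pods -n $DYN_NAMESPACE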

Send a few chat completion requests to generate structured logs from the frontend and worker pods of the DynamoGraphDeployment; one way to do this is sketched below. We are now all set to view the logs in Grafana.
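
A minimal sketch, assuming the frontend exposes an OpenAI-compatible API on port 8000; the service name and model below are placeholders that should be replaced with the values from your deployment:

$# Find the frontend service created for the DynamoGraphDeployment
$kubectl get svc -n $DYN_NAMESPACE
$
$# Port-forward the frontend service (replace <frontend-service>)
$kubectl port-forward svc/<frontend-service> 8000:8000 -n $DYN_NAMESPACE
$
$# In another terminal: send a chat completion request (replace <served-model>)
$curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "<served-model>", "messages": [{"role": "user", "content": "Hello!"}], "max_tokens": 32}'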

Viewing Logs in Grafana

Port-forward the Grafana service to access the UI:

$kubectl port-forward svc/prometheus-grafana 3000:80 -n $MONITORING_NAMESPACE

If everything is working, you should see a dashboard under Home > Dashboards > Dynamo Logs that can be used to view the logs associated with your DynamoGraphDeployments.

The dashboard enables filtering by DynamoGraphDeployment, namespace, and component type (e.g., frontend or worker).
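
The same labels can also be queried directly in Grafana's Explore view with LogQL. A sketch, using the label names from the collector configuration above (the label values are examples and depend on your deployment):

{namespace="dynamo-system", nvidia_com_dynamo_component_type="frontend"} | json

The json stage parses the JSONL payload so that individual fields, such as the log level and message, can be filtered on.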