Webhooks

This document describes the webhook functionality in the Dynamo Operator, including validation webhooks, certificate management, and troubleshooting.

Overview

The Dynamo Operator uses Kubernetes admission webhooks to provide real-time validation and mutation of custom resources. Currently, the operator implements validation webhooks that ensure invalid configurations are rejected immediately at the API server level, providing faster feedback to users compared to controller-based validation.

All webhook types (validating, mutating, conversion, etc.) share the same webhook server and TLS certificate infrastructure, making certificate management consistent across all webhook operations.

Key Features

  • Always enabled - Webhooks are a required component of the operator
  • Shared certificate infrastructure - All webhook types use the same TLS certificates
  • Automatic certificate generation - No manual certificate management required
  • cert-manager integration - Optional integration for automated certificate lifecycle
  • Multi-operator support - Lease-based coordination for cluster-wide and namespace-restricted deployments
  • Immutability enforcement - Critical fields protected via CEL validation rules

Current Webhook Types

  • Validating Webhooks: Validate custom resource specifications before persistence
    • DynamoComponentDeployment validation
    • DynamoGraphDeployment validation
    • DynamoModel validation
    • DynamoGraphDeploymentRequest validation
  • Mutating Webhooks: Apply default values to resources on creation
    • DynamoGraphDeployment defaulting

Note: All webhook types use the same certificate infrastructure described in this document.


Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│ API Server                                                      │
│ 1. User submits CR (kubectl apply)                              │
│ 2. API server calls MutatingWebhookConfiguration                │
└────────────────────────┬────────────────────────────────────────┘
                         │ HTTPS (TLS required)
┌────────────────────────┴────────────────────────────────────────┐
│ Webhook Server (in Operator Pod)                                │
│ 3. Applies defaults (e.g., operator version annotation)         │
│ 4. Returns mutated CR                                           │
└────────────────────────┬────────────────────────────────────────┘
                         │
┌────────────────────────┴────────────────────────────────────────┐
│ API Server                                                      │
│ 5. API server calls ValidatingWebhookConfiguration              │
└────────────────────────┬────────────────────────────────────────┘
                         │ HTTPS (TLS required)
┌────────────────────────┴────────────────────────────────────────┐
│ Webhook Server (in Operator Pod)                                │
│ 6. Validates CR against business rules                          │
│ 7. Returns admit/deny decision + warnings                       │
└────────────────────────┬────────────────────────────────────────┘
                         │
┌────────────────────────┴────────────────────────────────────────┐
│ API Server                                                      │
│ 8. If admitted: Persist CR to etcd                              │
│ 9. If denied: Return error to user                              │
└─────────────────────────────────────────────────────────────────┘
```

Admission Flow

  1. Mutating webhooks: Apply defaults and transformations before validation
  2. Validating webhooks: Validate the (possibly mutated) CR against business rules
  3. CEL validation: Kubernetes-native immutability checks (always active)
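
The CEL-based immutability checks live in the CRD schema itself, so they are enforced by the API server even if the webhook is unreachable. A minimal sketch of such a rule — the field name here is illustrative, not necessarily the operator's actual schema:

```yaml
# Illustrative CRD schema fragment: a CEL rule marking a spec field immutable
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        backendFramework:
          type: string
          x-kubernetes-validations:
            - rule: "self == oldSelf"
              message: "backendFramework is immutable"
```

On any update that changes the field, the API server rejects the request with the configured message, before the webhooks are even consulted.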

Upgrading from versions with webhook.enabled: false

The webhook.enabled Helm value has been removed. Webhooks are now a required component of the operator and are always active. If you previously ran with webhook.enabled: false, take the following steps before upgrading:

  1. Remove webhook.enabled from any custom values files. Helm will ignore the unknown key, but it should be cleaned up to avoid confusion.
  2. Ensure port 9443 is reachable from the Kubernetes API server to the operator pod. If you have NetworkPolicy rules or firewall configurations restricting traffic, add an ingress rule allowing the API server to reach the webhook server on port 9443.
  3. Ensure webhook TLS certificates are available. By default, Helm hooks generate self-signed certificates automatically during helm upgrade — no action needed. If you use cert-manager or externally managed certificates, verify your configuration is in place before upgrading.
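
For step 2, if your cluster enforces NetworkPolicies, a rule along these lines admits webhook traffic; the namespace and pod labels are assumptions to adjust for your deployment. The ingress rule is left open on the port (no `from` clause) because API-server traffic typically originates outside the pod network and cannot be selected by a podSelector:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-to-webhook
  namespace: dynamo-system   # your operator namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: dynamo-operator   # adjust to your pod labels
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 9443   # webhook server port
```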

Configuration

Certificate Management Options

The operator supports three certificate management modes:

| Mode | Description | Use Case |
|------|-------------|----------|
| Automatic (Default) | Helm hooks generate self-signed certificates | Testing and development environments |
| cert-manager | Integrate with cert-manager for automated lifecycle | Production deployments with cert-manager |
| External | Bring your own certificates | Production deployments with custom PKI |

Advanced Configuration

Complete Configuration Reference

```yaml
dynamo-operator:
  webhook:
    # Certificate management
    certManager:
      enabled: false
      issuerRef:
        kind: Issuer
        name: selfsigned-issuer

    # Certificate secret configuration
    certificateSecret:
      name: webhook-server-cert
      external: false

    # Certificate validity period (automatic generation only)
    certificateValidity: 3650  # 10 years

    # Certificate generator image (automatic generation only)
    certGenerator:
      image:
        repository: bitnami/kubectl
        tag: latest

    # Webhook behavior configuration
    failurePolicy: Fail   # Fail (reject on error) or Ignore (allow on error)
    timeoutSeconds: 10    # Webhook timeout

    # Namespace filtering (advanced)
    namespaceSelector: {} # Kubernetes label selector for namespaces
```

Failure Policy

```yaml
# Fail: Reject resources if webhook is unavailable (recommended for production)
webhook:
  failurePolicy: Fail
---
# Ignore: Allow resources if webhook is unavailable (use with caution)
webhook:
  failurePolicy: Ignore
```

Recommendation: Use Fail in production to ensure validation is always enforced. Only use Ignore if you need high availability and can tolerate occasional invalid resources.

Namespace Filtering

Control which namespaces are validated (applies to cluster-wide operator only):

```yaml
# Only validate resources in namespaces with specific labels
webhook:
  namespaceSelector:
    matchLabels:
      dynamo-validation: enabled
---
# Or exclude specific namespaces
webhook:
  namespaceSelector:
    matchExpressions:
      - key: dynamo-validation
        operator: NotIn
        values: ["disabled"]
```

Note: For namespace-restricted operators, the namespace selector is automatically set to validate only the operator’s namespace. This configuration is ignored in namespace-restricted mode.


Certificate Management

Automatic Certificates (Default)

Zero configuration required! Certificates are automatically generated during helm install and helm upgrade.

How It Works

  1. Pre-install/pre-upgrade hook: Generates self-signed TLS certificates

    • Root CA (valid 10 years)
    • Server certificate (valid 10 years)
    • Stores in Secret: <release>-webhook-server-cert
  2. Post-install/post-upgrade hook: Injects CA bundle into ValidatingWebhookConfiguration

    • Reads ca.crt from Secret
    • Patches ValidatingWebhookConfiguration with base64-encoded CA bundle
  3. Operator pod: Mounts certificate secret and serves webhook on port 9443
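
What the certificate generation hook produces can be reproduced by hand for inspection. This sketch mirrors what self-signed generation amounts to; the service DNS name and file names are examples, not the hook's actual commands:

```shell
# 1. Root CA, valid ~10 years
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 3650 \
  -subj "/CN=dynamo-webhook-ca"

# 2. Server key + CSR
openssl req -newkey rsa:2048 -nodes \
  -keyout tls.key -out server.csr \
  -subj "/CN=webhook"

# 3. Sign with the CA; the SAN must match the in-cluster service DNS name
printf "subjectAltName=DNS:my-operator-webhook-service.dynamo-system.svc\n" > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out tls.crt -days 3650 -extfile san.ext
```

The resulting `ca.crt`, `tls.crt`, and `tls.key` correspond to the three keys stored in the `<release>-webhook-server-cert` Secret.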

Certificate Validity

  • Root CA: 10 years
  • Server Certificate: 10 years (same as Root CA)
  • Automatic rotation: Certificates are re-generated on every helm upgrade

Smart Certificate Generation

The certificate generation hook is intelligent:

  • Checks existing certificates before generating new ones
  • Skips generation if valid certificates exist (valid for 30+ days with correct SANs)
  • Regenerates only when needed (missing, expiring soon, or incorrect SANs)

This means:

  • Fast helm upgrade operations (no unnecessary cert generation)
  • Safe to run helm upgrade frequently
  • Certificates persist across reinstalls (stored in Secret)
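
The "valid for 30+ days" test maps onto openssl's `-checkend` flag. This self-contained demo uses a deliberately short-lived certificate to show the expiry branch; the hook's real implementation may differ:

```shell
# Demo certificate valid for only 10 days (stands in for an expiring webhook cert)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout demo.key -out demo.crt -days 10 -subj "/CN=demo"

# -checkend N exits 0 only if the cert is still valid N seconds from now (30 days here)
if openssl x509 -in demo.crt -noout -checkend $((30*24*3600)); then
  echo "certificate valid for 30+ days: generation skipped"
else
  echo "certificate expires within 30 days: regenerate"
fi
```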

Manual Certificate Rotation

If you need to rotate certificates manually:

```bash
# Delete the certificate secret
kubectl delete secret <release>-webhook-server-cert -n <namespace>

# Upgrade the release to regenerate certificates
helm upgrade <release> dynamo-platform -n <namespace>
```

cert-manager Integration

For clusters with cert-manager installed, you can enable automated certificate lifecycle management.

Prerequisites

  1. cert-manager installed (v1.0+)
  2. CA issuer configured (e.g., selfsigned-issuer)

Configuration

```yaml
dynamo-operator:
  webhook:
    certManager:
      enabled: true
      issuerRef:
        kind: Issuer            # Or ClusterIssuer
        name: selfsigned-issuer # Your issuer name
```

How It Works

  1. Helm creates Certificate resource: Requests TLS certificate from cert-manager
  2. cert-manager generates certificate: Based on configured issuer
  3. cert-manager stores in Secret: <release>-webhook-server-cert
  4. cert-manager ca-injector: Automatically injects CA bundle into ValidatingWebhookConfiguration
  5. Operator pod: Mounts certificate secret and serves webhook
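
The Certificate resource created in step 1 looks roughly like this; the names, namespace, and duration below are illustrative rather than the chart's exact output:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dynamo-operator-webhook-cert
  namespace: dynamo-system
spec:
  secretName: dynamo-platform-webhook-server-cert
  dnsNames:
    - dynamo-platform-dynamo-operator-webhook-service.dynamo-system.svc
  issuerRef:
    kind: Issuer
    name: selfsigned-issuer
  duration: 8760h   # 1-year leaf certificate
```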

Benefits Over Automatic Mode

  • Automated rotation: cert-manager renews certificates before expiration
  • Custom validity periods: Configure certificate lifetime
  • CA rotation support: ca-injector handles CA updates automatically
  • Integration with existing PKI: Use your organization’s certificate infrastructure

Certificate Rotation

With cert-manager, certificate rotation is fully automated:

  1. Leaf certificate rotation (default: every year)

    • cert-manager auto-renews before expiration
    • controller-runtime auto-reloads new certificate
    • No pod restart required
    • No caBundle update required (same Root CA)
  2. Root CA rotation (every 10 years)

    • cert-manager rotates Root CA
    • ca-injector auto-updates caBundle in ValidatingWebhookConfiguration
    • No manual intervention required

Example: Self-Signed Issuer

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: dynamo-system
spec:
  selfSigned: {}
---
# Enable in platform values.yaml
dynamo-operator:
  webhook:
    certManager:
      enabled: true
      issuerRef:
        kind: Issuer
        name: selfsigned-issuer
```

External Certificates

Bring your own certificates for custom PKI requirements.

Steps

  1. Create the certificate secret manually:

     ```bash
     kubectl create secret tls <release>-webhook-server-cert \
       --cert=tls.crt \
       --key=tls.key \
       -n <namespace>

     # Also add ca.crt to the secret
     kubectl patch secret <release>-webhook-server-cert -n <namespace> \
       --type='json' \
       -p='[{"op": "add", "path": "/data/ca.crt", "value": "'$(base64 -w0 < ca.crt)'"}]'
     ```

  2. Configure the operator to use the external secret:

     ```yaml
     dynamo-operator:
       webhook:
         certificateSecret:
           external: true
           caBundle: <base64-encoded-ca-cert>  # Must be specified manually
     ```

  3. Deploy the operator:

     ```bash
     helm install dynamo-platform . -n <namespace> -f values.yaml
     ```

Certificate Requirements

  • Secret name: Must match webhook.certificateSecret.name (default: webhook-server-cert)
  • Secret keys: tls.crt, tls.key, ca.crt
  • Certificate SAN: Must include <service-name>.<namespace>.svc
    • Example: dynamo-platform-dynamo-operator-webhook-service.dynamo-system.svc
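
You can confirm the SAN requirement before creating the secret. This sketch generates a sample certificate carrying the required SAN and then runs the check; the service DNS name is an example to replace with your release and namespace:

```shell
# Example service DNS name: substitute <release> and <namespace> for your deployment
SVC_DNS="dynamo-platform-dynamo-operator-webhook-service.dynamo-system.svc"

# Sample certificate with the required SAN (stand-in for your real cert)
openssl req -x509 -newkey rsa:2048 -nodes -keyout sample.key -out sample.crt \
  -days 365 -subj "/CN=webhook" -addext "subjectAltName=DNS:$SVC_DNS"

# The check: fail loudly if the SAN is missing
openssl x509 -in sample.crt -noout -ext subjectAltName | grep -q "$SVC_DNS" \
  && echo "SAN ok" \
  || echo "SAN missing: the API server will reject the TLS handshake"
```

Run the same `openssl x509 -ext subjectAltName` check against your own `tls.crt` before creating the secret.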

Multi-Operator Deployments

The operator supports running both cluster-wide and namespace-restricted instances simultaneously using a lease-based coordination mechanism.

Scenario

```
Cluster:
├─ Operator A (cluster-wide, namespace: platform-system)
│  └─ Validates all namespaces EXCEPT team-a
└─ Operator B (namespace-restricted, namespace: team-a)
   └─ Validates only team-a namespace
```

How It Works

  1. Namespace-restricted operator creates a Lease in its namespace
  2. Cluster-wide operator watches for Leases named dynamo-operator-ns-lock
  3. Cluster-wide operator skips validation for namespaces with active Leases
  4. Namespace-restricted operator validates resources in its namespace
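
The coordination object is a standard `coordination.k8s.io` Lease. An illustrative example of what the namespace-restricted operator creates — only the name `dynamo-operator-ns-lock` comes from the mechanism above; the spec values are assumptions:

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: dynamo-operator-ns-lock
  namespace: team-a
spec:
  holderIdentity: team-a-operator      # illustrative holder name
  leaseDurationSeconds: 15             # illustrative duration
  renewTime: "2025-01-01T00:00:00.000000Z"
```

Inspect it with `kubectl get lease dynamo-operator-ns-lock -n team-a -o yaml`.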

Lease Configuration

The lease mechanism is automatically configured based on deployment mode:

```yaml
# Cluster-wide operator (default)
namespaceRestriction:
  enabled: false
# → Watches for leases in all namespaces
# → Skips validation for namespaces with active leases
---
# Namespace-restricted operator
namespaceRestriction:
  enabled: true
  namespace: team-a
# → Creates lease in team-a namespace
# → Does NOT check for leases (no cluster permissions)
```

Deployment Example

```bash
# 1. Deploy cluster-wide operator
helm install platform-operator dynamo-platform \
  -n platform-system \
  --set namespaceRestriction.enabled=false

# 2. Deploy namespace-restricted operator for team-a
helm install team-a-operator dynamo-platform \
  -n team-a \
  --set namespaceRestriction.enabled=true \
  --set namespaceRestriction.namespace=team-a
```

ValidatingWebhookConfiguration Naming

The webhook configuration name reflects the deployment mode:

  • Cluster-wide: <release>-validating
  • Namespace-restricted: <release>-validating-<namespace>

Example:

```
# Cluster-wide
platform-operator-validating

# Namespace-restricted (team-a)
team-a-operator-validating-team-a
```

This allows multiple webhook configurations to coexist without conflicts.

Lease Health

If the namespace-restricted operator is deleted or becomes unhealthy:

  • Lease expires after leaseDuration + gracePeriod (default: ~30 seconds)
  • Cluster-wide operator automatically resumes validation for that namespace

Troubleshooting

Webhook Not Called

Symptoms:

  • Invalid resources are accepted
  • No validation errors in logs

Checks:

  1. Verify the webhook configuration exists:

     ```bash
     kubectl get validatingwebhookconfiguration | grep dynamo
     ```

  2. Inspect the webhook configuration:

     ```bash
     kubectl get validatingwebhookconfiguration <name> -o yaml
     # Verify:
     # - caBundle is present and non-empty
     # - clientConfig.service points to correct service
     # - webhooks[].namespaceSelector matches your namespace
     ```

  3. Verify the webhook service exists:

     ```bash
     kubectl get service -n <namespace> | grep webhook
     ```

  4. Check operator logs for webhook startup:

     ```bash
     kubectl logs -n <namespace> deployment/<release>-dynamo-operator | grep webhook
     # Should see: "Registering validation webhooks"
     # Should see: "Starting webhook server"
     ```

Connection Refused Errors

Symptoms:

```
Error from server (InternalError): Internal error occurred: failed calling webhook:
Post "https://...webhook-service...:443/validate-...": dial tcp ...:443: connect: connection refused
```

Checks:

  1. Verify the operator pod is running:

     ```bash
     kubectl get pods -n <namespace> -l app.kubernetes.io/name=dynamo-operator
     ```

  2. Check the webhook server is listening:

     ```bash
     # Port-forward to the pod
     kubectl port-forward -n <namespace> pod/<operator-pod> 9443:9443

     # In another terminal, test the connection
     curl -k https://localhost:9443/validate-nvidia-com-v1alpha1-dynamocomponentdeployment
     # Should NOT get "connection refused"
     ```

  3. Verify the webhook port in the deployment:

     ```bash
     kubectl get deployment -n <namespace> <release>-dynamo-operator -o yaml | grep -A5 "containerPort: 9443"
     ```

  4. Check for webhook initialization errors:

     ```bash
     kubectl logs -n <namespace> deployment/<release>-dynamo-operator | grep -i error
     ```

Certificate Errors

Symptoms:

```
Error from server (InternalError): Internal error occurred: failed calling webhook:
x509: certificate signed by unknown authority
```

Checks:

  1. Verify the caBundle is present:

     ```bash
     kubectl get validatingwebhookconfiguration <name> -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d
     # Should output a valid PEM certificate
     ```

  2. Verify the certificate secret exists:

     ```bash
     kubectl get secret -n <namespace> <release>-webhook-server-cert
     ```

  3. Check certificate validity:

     ```bash
     kubectl get secret -n <namespace> <release>-webhook-server-cert -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -text
     # Check:
     # - Not expired
     # - SAN includes: <service-name>.<namespace>.svc
     ```

  4. Check the CA injection job logs:

     ```bash
     kubectl logs -n <namespace> job/<release>-webhook-ca-inject-<revision>
     ```

Helm Hook Job Failures

Symptoms:

  • helm install or helm upgrade hangs or fails
  • Certificate generation errors

Checks:

  1. List hook jobs:

     ```bash
     kubectl get jobs -n <namespace> | grep webhook
     ```

  2. Check job logs:

     ```bash
     # Certificate generation
     kubectl logs -n <namespace> job/<release>-webhook-cert-gen-<revision>

     # CA injection
     kubectl logs -n <namespace> job/<release>-webhook-ca-inject-<revision>
     ```

  3. Check RBAC permissions:

     ```bash
     # Verify the ServiceAccount exists
     kubectl get sa -n <namespace> <release>-webhook-ca-inject

     # Verify the ClusterRole and ClusterRoleBinding exist
     kubectl get clusterrole <release>-webhook-ca-inject
     kubectl get clusterrolebinding <release>-webhook-ca-inject
     ```

  4. Manual cleanup:

     ```bash
     # Delete failed jobs
     kubectl delete job -n <namespace> <release>-webhook-cert-gen-<revision>
     kubectl delete job -n <namespace> <release>-webhook-ca-inject-<revision>

     # Retry the helm upgrade
     helm upgrade <release> dynamo-platform -n <namespace>
     ```

Validation Errors Not Clear

Symptoms:

  • Webhook rejects resource but error message is unclear

Solution:

Check operator logs for detailed validation errors:

```bash
kubectl logs -n <namespace> deployment/<release>-dynamo-operator | grep "validate create\|validate update"
```

Webhook logs include:

  • Resource name and namespace
  • Validation errors with context
  • Warnings for immutable field changes

Stuck Deleting Resources

Symptoms:

  • Resource stuck in “Terminating” state
  • Webhook blocks finalizer removal

Solution:

The webhook automatically skips validation for resources being deleted. If stuck:

  1. Check if the webhook is blocking:

     ```bash
     kubectl describe <resource-type> <name> -n <namespace>
     # Look for events mentioning webhook errors
     ```

  2. Temporarily work around the webhook:

     ```bash
     # Option 1: Set failurePolicy to Ignore
     kubectl patch validatingwebhookconfiguration <name> \
       --type='json' \
       -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'

     # Option 2 (last resort): Delete the ValidatingWebhookConfiguration
     kubectl delete validatingwebhookconfiguration <name>
     ```

  3. Delete the resource again:

     ```bash
     kubectl delete <resource-type> <name> -n <namespace>
     ```

  4. Restore the webhook configuration:

     ```bash
     helm upgrade <release> dynamo-platform -n <namespace>
     ```

Best Practices

Production Deployments

  1. Use failurePolicy: Fail (default) to ensure validation is enforced
  2. Monitor webhook latency - Validation adds ~10-50ms per resource operation
  3. Use cert-manager for automated certificate lifecycle in large deployments
  4. Test webhook configuration in staging before production

Development Deployments

  1. Use failurePolicy: Ignore if webhook availability is problematic during development
  2. Keep automatic certificates (simpler than cert-manager for dev)

Multi-Tenant Deployments

  1. Deploy one cluster-wide operator for platform-wide validation
  2. Deploy namespace-restricted operators for tenant-specific namespaces
  3. Monitor lease health to ensure coordination works correctly
  4. Use unique release names per namespace to avoid naming conflicts

Support

For issues or questions:

  • Check Troubleshooting section
  • Review operator logs: kubectl logs -n <namespace> deployment/<release>-dynamo-operator
  • Open an issue on GitHub