Resolving WorkerPool Controller Panic Errors
Last updated: December 19, 2025
If you're experiencing panic errors with your Spacelift WorkerPool controller, they may be caused by outdated Custom Resource Definitions (CRDs) that are incompatible with your controller version.
Symptoms
You may see error messages like:
- runtime error: invalid memory address or nil pointer dereference
- unknown field "status.desiredPoolSize"
- unknown field "status.selector"
- Controller pods stuck in leader election
Root Cause
This issue occurs when the WorkerPool CRDs in your cluster don't match the version expected by your controller. The controller tries to set fields like status.desiredPoolSize that don't exist in the outdated CRD schema, causing a panic.
Even if you have made no changes, an external event may have triggered a controller restart and surfaced the mismatch.
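One way to confirm the mismatch is to export the installed CRD and check whether it declares the field the controller panics on. The helper below is a minimal sketch: `crd_has_field` is a hypothetical name made up for this example; it simply greps a CRD manifest file (for example, one exported with `kubectl get crd workerpools.workers.spacelift.io -o yaml > crd.yaml`) for a field name.

```shell
# Minimal sketch: report whether a CRD manifest file mentions a given
# schema field. crd_has_field is a hypothetical helper name.
#   $1 = path to a CRD manifest (e.g. exported with `kubectl get crd ... -o yaml`)
#   $2 = field name to look for (e.g. desiredPoolSize)
crd_has_field() {
  if grep -q "$2" "$1"; then
    echo "present"
  else
    echo "missing"
  fi
}
```

If the field is missing from the installed CRD but your controller version expects it, updating the CRDs as described below should resolve the panic.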
Solution
Step 1: Identify Your Controller Version
Check your controller logs to find the version number:
kubectl -n spacelift-workerpool-controller logs deploy/spacelift-workerpool-controller-controller-manager

Look for a line like: "msg":"Starting manager","version":"0.0.21"
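If the logs are noisy, a small filter can pull out just the version value. The sketch below assumes the log line shape shown above (a JSON "version" key); `extract_version` is a hypothetical helper name.

```shell
# Minimal sketch: print the first "version" value found in controller
# log output read from stdin. Assumes the JSON-style log line shown
# above; extract_version is a hypothetical helper name.
extract_version() {
  sed -n 's/.*"version":"\([^"]*\)".*/\1/p' | head -n 1
}
# Example usage (pipe the controller logs through the helper):
# kubectl -n spacelift-workerpool-controller logs \
#   deploy/spacelift-workerpool-controller-controller-manager | extract_version
```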
Step 2: Update the CRDs
Apply the correct CRDs for your controller version from the Spacelift Helm charts repository. For example, if you're using controller version 0.0.21:
Download the CRDs from the appropriate commit in the Spacelift Helm charts repository
Apply them to your cluster:
kubectl apply -f workerpools.workers.spacelift.io.yaml
kubectl apply -f workers.workers.spacelift.io.yaml
Step 3: Clean Up and Restart
If the controller is still failing after updating CRDs:
Scale down the controller deployment:
kubectl -n spacelift-workerpool-controller scale deployment spacelift-workerpool-controller-controller-manager --replicas=0

Delete any existing Worker resources (remove finalizers if needed):

kubectl -n spacelift-workerpool-controller get workers
kubectl -n spacelift-workerpool-controller delete worker <worker-name>

Delete the WorkerPool resource:

kubectl -n spacelift-workerpool-controller delete workerpool <workerpool-name>

Scale the controller back up:

kubectl -n spacelift-workerpool-controller scale deployment spacelift-workerpool-controller-controller-manager --replicas=2

Recreate your WorkerPool resource.
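If a Worker deletion hangs because of a finalizer (the "remove finalizers if needed" case above), clearing the finalizer list lets the pending delete complete. The sketch below wraps the `kubectl patch` call in a hypothetical helper; `remove_worker_finalizers` and the `DRY_RUN` guard are assumptions made up for this example, so you can preview the command before running it against a real cluster.

```shell
# Minimal sketch: clear the finalizers on a stuck Worker so a pending
# delete can complete. remove_worker_finalizers and DRY_RUN are
# hypothetical names; $1 is a Worker name from `kubectl get workers`.
remove_worker_finalizers() {
  cmd="kubectl -n spacelift-workerpool-controller patch worker $1 --type merge -p '{\"metadata\":{\"finalizers\":null}}'"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"   # dry run: print the command instead of executing it
  else
    eval "$cmd"
  fi
}
```

Run with `DRY_RUN=1` first to confirm the generated command targets the Worker you expect.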
Prevention
To avoid this issue in the future:
- Always update CRDs manually when upgrading the WorkerPool controller, as documented in our setup guide.
- Consider using the --skip-crds flag with Helm and managing CRDs separately. This works around a known Helm limitation: CRDs are not automatically updated during chart upgrades.
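The upgrade flow with --skip-crds can be sketched as below. The helper only prints the commands so you can review them first; the release and chart names passed in the example are assumptions, so substitute the values from your own installation.

```shell
# Minimal sketch: print an upgrade plan that skips Helm's CRD handling
# and applies the CRD manifests manually. print_upgrade_plan is a
# hypothetical helper name; $1 = release name, $2 = chart reference
# (both assumptions -- use your own values).
print_upgrade_plan() {
  echo "helm upgrade $1 $2 --namespace spacelift-workerpool-controller --skip-crds"
  echo "kubectl apply -f workerpools.workers.spacelift.io.yaml"
  echo "kubectl apply -f workers.workers.spacelift.io.yaml"
}
# Example:
# print_upgrade_plan spacelift-workerpool-controller spacelift/workerpool-controller
```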