How does autoscaling work for private Kubernetes Spacelift Workers?
Last updated: September 15, 2025
When you set up autoscaling for private Spacelift Workers using KEDA (Kubernetes Event-driven Autoscaler), the scaling logic is handled by KEDA itself, not directly by Spacelift.
How the scaling mechanism works
Here's how the autoscaling process functions:
Spacelift publishes metrics: Spacelift publishes worker pool queue metrics that indicate the current workload demand
KEDA monitors metrics: KEDA continuously monitors these metrics from Spacelift
KEDA makes scaling decisions: Based on the queue metrics, KEDA's algorithm determines how many worker replicas should be running
Automatic scaling: KEDA automatically scales the number of worker pods up or down based on demand
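The steps above come together in a KEDA ScaledObject that points at your worker pool workload and at the queue metrics. The sketch below is illustrative only, assuming the metrics are exposed via Prometheus: the metric name, query labels, target reference, and Prometheus address are placeholders you would replace with the values from your own setup and from the autoscaling guide.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: spacelift-worker-pool-scaler
spec:
  scaleTargetRef:
    name: my-worker-pool          # placeholder: the workload running your workers
  minReplicaCount: 0              # scale to zero when the queue is empty
  maxReplicaCount: 10             # upper bound on concurrent workers
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # placeholder address
        # Hypothetical metric/query: pending runs for one worker pool
        query: spacelift_worker_pool_runs_pending{worker_pool="my-pool"}
        threshold: "1"            # aim for roughly one worker per pending run
```

With a configuration like this, KEDA polls the query on its polling interval and adjusts the replica count so the observed value stays near the threshold, which is why no explicit "pending runs > available runners" condition appears anywhere in the file.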
Why you don't see explicit scaling conditions
You won't find an explicit "pending runs > available runners" condition in your KEDA configuration because that comparison happens inside KEDA itself. KEDA feeds the worker pool queue metrics published by Spacelift into a Kubernetes Horizontal Pod Autoscaler (HPA) that it manages on your behalf, and the HPA derives the replica count from those metrics, so you never have to spell out the threshold logic as an explicit condition.
This approach is more flexible than a hand-written threshold rule: beyond the basic metric-to-replica calculation, the underlying autoscaler supports tuning such as stabilization windows and scale-up/scale-down policies, which smooth out short-lived spikes and control how quickly replicas are added or removed.
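At its core, the calculation KEDA delegates to the autoscaler can be sketched as "enough replicas that each one handles at most the target amount of the metric", clamped to the configured bounds. The function below is a simplified illustration of that idea (the real autoscaler also applies tolerances and stabilization windows); the function name and parameters are ours, not part of any API.

```python
import math


def desired_replicas(metric_value: float, target_per_replica: float,
                     min_replicas: int, max_replicas: int) -> int:
    """Simplified sketch of the autoscaling calculation: scale so that
    each replica handles at most `target_per_replica` of the metric,
    clamped to the configured replica bounds."""
    raw = math.ceil(metric_value / target_per_replica)
    return max(min_replicas, min(max_replicas, raw))


# 7 pending runs with a target of 1 run per worker -> 7 workers
print(desired_replicas(7, 1, min_replicas=1, max_replicas=10))   # -> 7
# An empty queue scales down to the configured minimum
print(desired_replicas(0, 1, min_replicas=0, max_replicas=10))   # -> 0
# Demand beyond the cap is clamped to maxReplicaCount
print(desired_replicas(25, 1, min_replicas=1, max_replicas=10))  # -> 10
```

This is why the queue metric and the trigger's threshold are the only knobs you set: the comparison against available workers falls out of the division itself.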
For detailed setup instructions and examples, you can refer to our autoscaling guide.