How do I resolve RAM limit issues with Azure Resource SKUs API calls in Spacelift runs?
Last updated: September 15, 2025
Context
When Spacelift runners in Kubernetes call the Azure Resource SKUs API through the azapi provider, runs may fail under memory pressure. The unfiltered SKUs response is large, and parsing it can cause a significant memory spike in the worker pod; when the node's available memory drops below the kubelet's eviction threshold, the pod is evicted and the run fails.
Answer
There are two key steps to resolve this issue:
Set resource requests and limits for the Spacelift worker pods in your workerpool configuration. Pods without resource requests fall into the BestEffort QoS class and are the first candidates for eviction under node memory pressure. See the Kubernetes Workers documentation for where to configure this in your workerpool settings.
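As an illustration, the requests and limits themselves use the standard Kubernetes `resources` stanza shown below. Where this stanza nests (Helm chart values or the worker pod template) depends on your Spacelift worker installation, and the memory and CPU figures here are illustrative assumptions, not recommendations:

```yaml
# Sketch only: standard Kubernetes resources stanza for the worker container.
# The exact nesting depends on your Spacelift worker installation; the
# memory/CPU values are placeholders to size for your own workloads.
resources:
  requests:
    memory: "2Gi"
    cpu: "500m"
  limits:
    memory: "4Gi"
    cpu: "1"
```

Setting requests moves the pod from the BestEffort QoS class to Burstable (or Guaranteed, when requests equal limits), which places it later in the kubelet's eviction order when the node comes under memory pressure.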
Reduce memory usage by filtering the Azure Resource SKUs API response. Add a $filter query parameter to your azapi_resource_action data source so only the SKUs for the regions you need are returned:
```hcl
data "azapi_resource_action" "skus" {
  # ... other configuration ...

  query_parameters = {
    "$filter" = ["location eq 'westeurope'"] # adjust location as needed
  }
}
```
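For context, a sketch of consuming the filtered response follows. Note that the shape of `output` depends on your azapi provider version, and the field names (`value`, `name`, `resourceType`) follow the Resource SKUs API response schema; treat the `jsondecode` call and the filter expression as assumptions to adapt:

```hcl
# Illustrative only: in azapi provider 1.x, `output` is a JSON string and
# needs jsondecode(); in 2.x it is already an object, so drop jsondecode().
locals {
  skus = jsondecode(data.azapi_resource_action.skus.output).value

  # Example: extract VM size names from the filtered SKU list.
  vm_sizes = [for s in local.skus : s.name if s.resourceType == "virtualMachines"]
}
```

Because the $filter is applied server-side, the provider never holds the full multi-region SKU list in memory, which is what shrinks the spike inside the worker pod.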
Together, these changes prevent pod eviction from two directions: the resource requests give the scheduler and kubelet accurate information so the pod is no longer a primary eviction candidate, and the filtered query shrinks the API response that causes the memory spike in the first place.