Can I share files between Terraform plan and apply stages running in different containers?

Last updated: September 8, 2025

Context

When using Terraform in a containerized environment, you may need to generate files during the `plan` stage and access them during the `apply` stage, even though these stages run in different containers. A common use case is packaging Lambda function code with the `archive_file` data source so that zip files don't have to be committed to the Git repository.

Answer

Yes, you can share files between Terraform `plan` and `apply` stages by using a shared storage solution accessible to both containers. Here's how to implement this:

  1. Set up a shared storage location (such as Amazon S3 or a persistent volume) that both containers can access; a minimal bucket sketch is shown after this list.

  2. During the `plan` stage:

    • Use the `archive_file` data source to create your zip files

    • Upload the generated files to your shared storage location

  3. During the `apply` stage:

    • Configure Terraform to retrieve the files from the shared storage location

    • Reference these files in your resource configurations
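
For step 1, a minimal sketch of the shared S3 bucket might look like the following. The bucket name is a placeholder, and versioning is optional but helps when several pipeline runs write to the same key.

```hcl
# Shared bucket that both the plan and apply containers can reach
# ("your-shared-bucket" is a placeholder name)
resource "aws_s3_bucket" "shared_artifacts" {
  bucket = "your-shared-bucket"
}

# Optional: keep prior versions of uploaded packages
resource "aws_s3_bucket_versioning" "shared_artifacts" {
  bucket = aws_s3_bucket.shared_artifacts.id
  versioning_configuration {
    status = "Enabled"
  }
}
```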

Example workflow using S3:

```hcl
# Generate zip file during plan
data "archive_file" "lambda_function" {
  type        = "zip"
  source_dir  = "${path.module}/lambda"
  output_path = "lambda_function.zip"
}

# Upload the package to the shared S3 bucket
resource "aws_s3_object" "lambda_package" {
  bucket = "your-shared-bucket"
  key    = "lambda_function.zip"
  source = data.archive_file.lambda_function.output_path
}

# Reference the uploaded package in the Lambda resource
resource "aws_lambda_function" "function" {
  s3_bucket = aws_s3_object.lambda_package.bucket
  s3_key    = aws_s3_object.lambda_package.key
  # ... other configuration
}
```
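
If you choose a persistent volume instead of S3, you can point `archive_file` at the shared mount so the apply container sees the zip produced during plan. This is only a sketch: the `/shared` mount point and the function name are assumptions, not part of the original example.

```hcl
# Write the package to a volume mounted at the same path in both containers
# ("/shared" is an assumed mount point)
data "archive_file" "lambda_function" {
  type        = "zip"
  source_dir  = "${path.module}/lambda"
  output_path = "/shared/lambda_function.zip"
}

resource "aws_lambda_function" "function" {
  function_name    = "example-function" # placeholder
  filename         = data.archive_file.lambda_function.output_path
  # Hash of the package contents so code changes trigger an update
  source_code_hash = data.archive_file.lambda_function.output_base64sha256
  # ... other configuration
}
```

The same `source_code_hash` attribute can also be set on the S3-based example above so the function is redeployed whenever the packaged code changes.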

This approach keeps generated zip files out of your Git repository while still making the Lambda package available to both the plan and apply containers.