I am using a GitLab pipeline that uses Terraform to deploy AWS resources. The pipeline configuration is pre-existing, so I cannot change anything there. The pipeline run needs to generate some files with a shell script via the external data source, and then upload the generated files to S3 with the aws_s3_object resource. If I run this locally on my machine it works, but it does not work in the pipeline.
data "external" "proto_generator" {
program = [
"sh", "-c",
<<-EOT
echo "=== Starting Proto Generation ===" >&2
# Redirect all installation output to stderr
apk add --no-cache bash jq protobuf protobuf-dev >&2
# Show current directory and contents
pwd >&2
ls -la ${path.root}/libraries/proto_files >&2
# Execute proto generator script and redirect its output to stderr
bash ${path.root}/scripts/proto_generator.sh >&2
# List generated files for debugging
echo "Generated files:" >&2
ls -la ${path.root}/libraries/proto_files/*/*_descriptor.desc >&2
# Only output the JSON object, nothing else
printf '{"result":"success"}'
EOT
]
}
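The proto_generator.sh script itself is not reproduced here; as a rough, hypothetical sketch (assuming protoc is used and the libraries/proto_files/<package>/ layout shown above), a generator that produces the *_descriptor.desc files might look something like this:

#!/usr/bin/env sh
# Hypothetical sketch -- not the actual proto_generator.sh from the project.
set -eu

for proto in libraries/proto_files/*/*.proto; do
  # Skip the literal pattern if no .proto files match
  [ -e "$proto" ] || continue
  dir=$(dirname "$proto")
  name=$(basename "$proto" .proto)
  # Write a serialized FileDescriptorSet next to the .proto file;
  # --include_imports bundles transitive dependencies into the descriptor.
  protoc --proto_path="$dir" \
         --include_imports \
         --descriptor_set_out="$dir/${name}_descriptor.desc" \
         "$proto"
done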
I use the aws_s3_object resource to upload to an S3 bucket as follows:
resource "aws_s3_object" "descriptor_files" {
# Use for_each to handle multiple files
for_each = fileset("${path.root}/libraries/proto_files", "**/*_descriptor.desc")
bucket = aws_s3_bucket.data_model_bucket.id
key = each.value
source = "${path.root}/libraries/proto_files/${each.value}"
# Ensure this runs after the descriptor files are generated
depends_on = [data.external.proto_generator]
# Optional: Add content type and etag for caching
content_type = "application/octet-stream"
etag = filemd5("${path.root}/libraries/proto_files/${each.value}")
}
The files should get generated in the plan phase and then uploaded in the apply phase. I have put some print statements in the external data block, but nothing gets printed. The pipeline runs without errors, yet no files are uploaded and I cannot see any of the logs in the plan or apply phase. What am I doing wrong?
1 Answer
To demonstrate what I meant in the comments below: the artifacts keyword will instruct GitLab to save the listed files/directories for the next steps, for example:
stages:
  - plan
  - apply

image: hashicorp/terraform

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - libraries/proto_files/**/*_descriptor.desc
      - tfplan

# Here we get the artifacts that were saved in the plan step
apply:
  stage: apply
  script:
    - terraform init
    - terraform apply -auto-approve tfplan
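If the upload still does not happen, a quick way to confirm that the generated files actually reach the apply job is to list them in that job's script before Terraform runs, as suggested in the comments below (paths are illustrative and assume the layout from the question):

apply:
  stage: apply
  script:
    # Sanity check: these descriptors were produced by the plan job and handed over via artifacts
    - ls -la libraries/proto_files/*/*_descriptor.desc
    - terraform init
    - terraform apply -auto-approve tfplan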
Did you use the artifacts keyword to save the files and the plan output? If yes, then I'm not sure what's wrong, but try adding another line in the script block of the step that runs terraform apply (like the ls -la prints you did, but outside the Terraform code) to see if the files are available to that step. – towel, Mar 30 at 18:17

In the terraform plan step you add an artifacts: block and put a paths: list inside it with all the outputs you wish to transfer to the terraform apply step. GitLab will then make the artifacts available to any step that comes after the plan step. – towel, Mar 31 at 19:56