error creating the plan "instance-id": "xxxxxxx", "error": "signal: killed" #1496

Qumber-ali commented Dec 27, 2024

Every time I complete the setup with all the resources needed to deploy a Terraform module, the tf-runner initializes the S3 backend successfully, but when it goes to create the plan it fails with an error stating "signal: killed".
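
For context, "signal: killed" means the terraform process received SIGKILL, which on Kubernetes most often comes from the OOM killer terminating the runner container. A quick way to check, assuming the runner pod follows the default `<name>-tf-runner` naming (adjust the pod name if yours differs):

```sh
# Look for "Reason: OOMKilled" / "Exit Code: 137" in the last container state
kubectl -n tf-controller describe pod authentication-tf-runner | grep -A 5 'Last State'

# Recent namespace events can also surface OOM kills or evictions
kubectl -n tf-controller get events --sort-by=.lastTimestamp | tail -n 20
```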

My Terraform object file:

```yaml
apiVersion: infra.contrib.fluxcd.io/v1alpha2
kind: Terraform
metadata:
  name: authentication
  namespace: tf-controller
spec:
  path: ./
  interval: 1m
  approvePlan: auto
  backendConfig:
    customConfiguration: |
      backend "s3" {
        region = "eu-west-1"
        bucket = "xyz-terraform-state"
        key = "bucket.tfstate"
        acl = "bucket-owner-full-control"
        workspace_key_prefix = "035422210734/eu-west-1/fabsb/green"
        assume_role = {
          role_arn = "arn:aws:iam::498436004313:role/callsign-terraform-deployer-nonprod"
        }
      }
  runnerPodTemplate:
    spec:
      initContainers:
        - name: init-set-permissions
          image: busybox
          command:
            - sh
            - '-c'
            - >-
              chown -R 65532:65532 /tmp/tf-controller-authentication && chmod -R
              755 /tmp/tf-controller-authentication
          volumeMounts:
            - name: app-volume
              mountPath: /tmp/tf-controller-authentication
      tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
        - effect: NoExecute
          key: role
          value: web
      image: '892909003891.dkr.ecr.eu-west-1.amazonaws.com/tf-runner:1.10.0'
      env: null
      volumeMounts:
        - name: app-volume
          mountPath: /tmp/tf-controller-authentication
        - name: tf-vars
          mountPath: /tmp/tf-controller-authentication/inputs.auto.tfvars.json
          subPath: inputs.auto.tfvars.json
        - name: private-key
          mountPath: /home/runner/.ssh/id_rsa
          subPath: id_rsa
        - name: public-key
          mountPath: /home/runner/.ssh/id_rsa.pub
          subPath: id_rsa.pub
        - name: known-hosts
          mountPath: /home/runner/.ssh/known_hosts
          subPath: known_hosts
      volumes:
        - name: app-volume
          emptyDir: {}
        - name: tf-vars
          configMap:
            name: terraform-vars
        - name: private-key
          secret:
            secretName: tf-private-key
        - name: public-key
          secret:
            secretName: tf-public-key
        - name: known-hosts
          secret:
            secretName: known-hosts
  sourceRef:
    kind: GitRepository
    name: authentication
    namespace: tf-controller
```
My GitRepository object:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: authentication
  namespace: tf-controller
spec:
  interval: 30s
  ref:
    tag: v4.3.0
  secretRef:
    name: tf-controller
  timeout: 60s
  url: https://github.com/terraform-aws-modules/terraform-aws-s3-bucket
```

The output I get in the tf-runner logs:

(screenshot of the tf-runner log output attached)
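
If the runner is in fact being OOM-killed, one possible mitigation is to give it explicit resource requests and limits. This is a sketch only: whether `runnerPodTemplate.spec` accepts a `resources` field depends on the tf-controller version, so treat the field below as an assumption and verify it against your installed CRD (e.g. with `kubectl explain terraform.spec.runnerPodTemplate.spec`).

```yaml
# Hypothetical addition to the Terraform object above.
# The resources field is an assumption -- it may not be supported by
# every tf-controller release; verify against your installed CRD.
spec:
  runnerPodTemplate:
    spec:
      resources:
        requests:
          memory: 512Mi
          cpu: 200m
        limits:
          memory: 2Gi
```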