Unable to be used with terraform workspaces #122
Comments
Instead of setting `enabled = false`, enable the module only in the default workspace:

```hcl
module "terraform_state_backend" {
  enabled = terraform.workspace == "default"
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.1"
  # ...
}
```

This way, the module will create the backend resources only once, in the `default` workspace. The rest of your configuration can then be gated the other way around:

```hcl
module "rest_of_my_config" {
  # Avoid creating anything within if we're in the default workspace.
  count = terraform.workspace == "default" ? 0 : 1
  # ...
}
```
The module creates an S3 backend to be used by Terraform. Usually this is done once per company/organization, so I do not understand why you are trying to create one per workspace. Whether or not you are using workspaces, it is your responsibility to avoid creating duplicate resources by using different inputs for different instantiations of this module. This module provides several inputs you can use to vary the names of the created resources.
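For example (a minimal sketch, assuming the standard Cloud Posse null-label inputs `namespace`, `stage`, `name`, and `attributes`; check the inputs of your module version):

```hcl
module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.1"

  namespace  = "acme"       # assumption: your organization prefix
  stage      = "prod"
  name       = "terraform"
  attributes = ["state"]    # resource names become e.g. acme-prod-terraform-state
}
```

Different inputs (a different `stage` or `attributes`) produce differently named buckets and tables, so two instantiations will not collide.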
Closing this out, as it's a misunderstanding by the module consumer of how to use this module. @eloo, if you're still struggling with how to use this module correctly after the above comments, please feel free to ask questions here or ping me, and I can try to help you understand the correct usage in your scenario.
Describe the Bug
Hi,
not sure if it's a bug or a not-yet-implemented feature, but it looks like the module is not working properly together with Terraform workspaces.
The issue seems to be that when a workspace is switched, this module tries to create the S3 bucket and DynamoDB table again. But these two resources already exist, so it fails.
Using the workspace in the bucket and DynamoDB table names will cause issues with the `backend.tf`, because it will change all the time. Using `enabled = false` will cause the first Terraform workspace, which created the S3 bucket, to destroy it again. That doesn't sound good either. :D

Expected Behavior
The module checks whether the expected bucket already exists and then skips the creation.
Maybe something like the `enabled` flag, but a `skip_creation_if_resources_exists` flag or something similar.
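A hypothetical usage of such a flag (note: this input does not exist in the module; the name is taken from the suggestion above purely to illustrate the request):

```hcl
module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.1"

  # Hypothetical input: skip creating the S3 bucket and DynamoDB table
  # when they already exist, instead of failing.
  skip_creation_if_resources_exists = true
}
```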
Steps to Reproduce
Steps to reproduce the behavior:
Additional Context
Maybe it's also not possible, and for multi-workspace usage we need to create a separate Terraform project (workspace) that takes care of these resources, but then an example would be nice.
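For example, a dedicated project might look like this (a minimal sketch; it assumes the Cloud Posse null-label inputs `namespace`, `stage`, and `name`, which may differ by module version):

```hcl
# state-backend/main.tf — a standalone root module, applied once and
# never switched between workspaces, that owns the backend resources.
module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.1"

  namespace = "example"   # assumption: your organization prefix
  stage     = "shared"
  name      = "terraform"
}
```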
Thanks