
Snowflake Connection Auth fails when specifying Warehouse #1099

Closed
El-Carverino opened this issue Jul 6, 2022 · 7 comments
Labels: bug, category:provider_config

Comments


El-Carverino commented Jul 6, 2022

Provider Version

Snowflake Provider v0.37.0 (and v0.37.1)
Note: This was not an issue with v0.36.0 (or earlier).

Terraform Version

Terraform v1.0.9 (also reproduced with later versions).

Describe the bug

After upgrading to v0.37.x, running terraform plan (or apply) on an existing config with no changes successfully refreshes the state of all existing resources; however, when the provider then inspects the current Snowflake environment, it returns this same error (in some form) for every existing resource, and the command fails:

390201 (08004): The requested warehouse does not exist or not authorized.

Expected behavior

When running terraform plan (or apply) on an existing config with no changes, after successfully refreshing the state of all existing resources, it should report the following success message:

No changes. Your infrastructure matches the configuration.

Code samples and commands

terraform init -input=false
terraform plan -out="${TF_WORKSPACE}-tfplan"

Additional context

The Snowflake provider in the Terraform config is configured only with a role attribute. All other connection properties come from ENV variables (which have been confirmed to be accurate); a sketch of this setup follows the list below:

  • SNOWFLAKE_ACCOUNT
  • SNOWFLAKE_REGION
  • SNOWFLAKE_USER
  • SNOWFLAKE_WAREHOUSE
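
As a rough sketch (not the reporter's actual files, and with a placeholder role name), the setup above corresponds to a provider block like this, with everything else supplied through the SNOWFLAKE_* ENV variables:

# Only the role attribute is set in the Terraform config; account, region,
# user, and warehouse are expected to come from the ENV variables listed above.
provider "snowflake" {
  role = "SYSADMIN" # placeholder role name
}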

Note: The provider documentation (and code) in this release appears to add new handling for a warehouse attribute and/or the SNOWFLAKE_WAREHOUSE ENV variable. However, I have been successfully using this ENV variable for a long time, with the expected result, since (I believe) the underlying Snowflake connector checks for it. (It is definitely utilized, because the intended user does not have a default warehouse defined.) Perhaps there is a conflict with the updated provider configuration/code?

I also attempted removing the SNOWFLAKE_WAREHOUSE ENV variable and using the new provider warehouse attribute instead, but that resulted in the same error(s).
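
For reference, the warehouse-attribute variant mentioned above would look roughly like this (the names are placeholders, not the reporter's values):

provider "snowflake" {
  role      = "SYSADMIN"    # placeholder role name
  warehouse = "COMPUTE_WH"  # placeholder; replaces the SNOWFLAKE_WAREHOUSE ENV variable
}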

El-Carverino added the bug label on Jul 6, 2022
@El-Carverino (Author)

This is still occurring with v0.40.0.

@aleenprd

still occurring in 2024

@sfc-gh-asawicki (Collaborator)

Hey @aleenprd, we will soon rework the provider config as part of https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/ROADMAP.md#providers-configuration-rework. We will then address this issue.

@aleenprd are you using the newest version of the provider (v0.95.0)?

@aleenprd

0.94.1 was stable for me but started randomly acting out yesterday. I run OpenTofu (version 1.8.1) with this locally, in a GitLab runner, and on Kubernetes, and the error appeared inconsistently across environments. I had to grant warehouse usage to USERADMIN and SECURITYADMIN, which is not actually needed; everything worked just fine without it before yesterday. Super weird. I also noticed that the setup eventually degrades if you keep .terraform/ and .terraform.lock.hcl in the project, and that it is better to run terraform init every time.
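
For context, the temporary workaround grant described above could be written in provider HCL roughly as follows; this is a sketch assuming the grant resource shape from the v0.9x provider docs, and the warehouse name is a placeholder rather than anything from the reporter's setup:

# Roughly equivalent to: GRANT USAGE ON WAREHOUSE COMPUTE_WH TO ROLE USERADMIN;
resource "snowflake_grant_privileges_to_account_role" "useradmin_warehouse_usage" {
  account_role_name = "USERADMIN"
  privileges        = ["USAGE"]

  on_account_object {
    object_type = "WAREHOUSE"
    object_name = "COMPUTE_WH" # placeholder warehouse name
  }
}

(A second, analogous resource would cover SECURITYADMIN.)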

@sfc-gh-asawicki (Collaborator)

If it started acting out randomly, then it may be an issue with Snowflake rather than with the provider itself. Could you provide the config you are using and the logs (running with the TF_LOG=DEBUG environment variable)?

Also, please keep in mind that we currently do not support OpenTofu (it should work out of the box, but we do not test the provider against it).

@aleenprd

I doubt it. I tried manually provisioning the same resources in Snowflake with those roles; they don't even need a warehouse in the first place. I may revert my fix and send you some logs if it appears again. Thanks.

@sfc-gh-dszmolka (Collaborator)

Closing this out now, as it has been inactive for a while. If you still encounter the issue even with the v1 versions, please open a new issue. Thank you!
