
Update Terraform digitalocean to v2.44.1 #55

Open
wants to merge 1 commit into main from renovate/digitalocean-2.x

Conversation

@renovate renovate bot (Contributor) commented Mar 22, 2023

This PR contains the following updates:

| Package | Type | Update | Change |
| --- | --- | --- | --- |
| digitalocean (source) | required_provider | minor | 2.26.0 -> 2.44.1 |
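In Terraform, this update corresponds to bumping the pinned provider version. A minimal sketch, assuming the provider is pinned in a `required_providers` block (the pessimistic-constraint style is an assumption; the repository may pin an exact version instead):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      # Before this PR:
      # version = "~> 2.26.0"
      version = "~> 2.44.1"
    }
  }
}
```

After updating the constraint, `terraform init -upgrade` refreshes the dependency lock file so the new provider build is recorded.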

Release Notes

digitalocean/terraform-provider-digitalocean (digitalocean)

The per-release entries were collapsed link lists in the original comment; only the change categories for each release, and two concrete entries, are recoverable:

- v2.44.1: bug fixes, misc
- v2.44.0: improvements, bug fixes, misc
- v2.43.0: improvements, misc
- v2.42.0: improvements, misc
- v2.41.0: improvements, bug fixes, misc
- v2.40.0: improvements, bug fixes, misc
- v2.39.2: misc
- v2.39.1: bug fixes
- v2.39.0: improvements, bug fixes, misc
- v2.38.0: improvements, bug fixes, misc
- v2.37.1: bug fixes
- v2.37.0: improvements, bug fixes, misc
- v2.36.0: improvements, bug fixes, misc
- v2.35.0: improvements, bug fixes, misc
- v2.34.1: bug fixes
- v2.34.0: improvements, bug fixes
- v2.33.0: improvements, bug fixes
- v2.32.0: improvements, bug fixes, misc
- v2.31.0: features, improvements, bug fixes, misc
- v2.30.0: features, improvements, bug fixes, misc
  - provider: Enable retries for requests that fail with a 429 or 500-level error by default (#1016). - @danaelhe
- v2.29.0: features, improvements, bug fixes, misc
- v2.28.1: bug fixes
- v2.28.0: improvements, bug fixes, misc
- v2.27.1: bug fixes, misc
- v2.27.0: improvements, bug fixes, docs, misc
  - digitalocean_custom_image: use correct pending statuses for custom images (#931). - @rsmitty
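The v2.30.0 note above means transient 429 and 500-level API errors are retried by the provider out of the box. A sketch of tuning that behavior in the provider block; the argument names (`http_retry_max`, `http_retry_wait_min`, `http_retry_wait_max`) are recalled from the provider documentation and should be verified against the Terraform Registry docs for the installed version:

```hcl
provider "digitalocean" {
  token = var.do_token

  # Retry behavior (on by default since v2.30.0); these values are illustrative.
  http_retry_max      = 4  # max retries for 429/5xx responses
  http_retry_wait_min = 1  # minimum backoff, seconds
  http_retry_wait_max = 30 # maximum backoff, seconds
}
```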


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.
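The behavior described above (no schedule, automerge disabled) comes from the repository's Renovate configuration. A hedged sketch of how one could opt in to automerging minor and patch provider updates; the preset and rule values are illustrative, not this repository's actual config:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base"],
  "packageRules": [
    {
      "matchDatasources": ["terraform-provider"],
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```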


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@github-actions

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-WbKZeaHSPMJno9Mw

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Warning: This plan was generated using a different version of Terraform; the
diff presented here may be missing representations of recent features.

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade   = true
      + cluster_subnet = (known after apply)
      + created_at     = (known after apply)
      + endpoint       = (known after apply)
      + ha             = false
      + id             = (known after apply)
      + ipv4_address   = (known after apply)
      + kube_config    = (sensitive value)
      + name           = (known after apply)
      + region         = "nyc3"
      + service_subnet = (known after apply)
      + status         = (known after apply)
      + surge_upgrade  = true
      + updated_at     = (known after apply)
      + urn            = (known after apply)
      + version        = "1.26.3-do.0"
      + vpc_uuid       = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.11.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.14.4"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.5.2"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

Pusher: @renovate[bot], Action: pull_request
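The plan above implies a cluster definition roughly like the following. This is a sketch reconstructed from the planned attribute values; the name expression is an assumption (the plan only shows it as known after apply), and computed attributes are omitted:

```hcl
resource "digitalocean_kubernetes_cluster" "this" {
  # Actual name is (known after apply); deriving it from random_id is an assumption.
  name          = "do-k8s-${random_id.cluster_id.hex}"
  region        = "nyc3"
  version       = "1.26.3-do.0"
  auto_upgrade  = true
  surge_upgrade = true
  ha            = false

  maintenance_policy {
    day        = "friday"
    start_time = "03:00"
  }

  node_pool {
    name       = "worker-pool"
    size       = "s-1vcpu-2gb"
    auto_scale = true
    min_nodes  = 1
    max_nodes  = 2
  }
}
```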

@renovate renovate bot changed the title from "Update Terraform digitalocean to v2.27.0" to "Update Terraform digitalocean to v2.27.1" on Mar 22, 2023
@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from 61ce178 to 58073de on March 22, 2023 20:13
@github-actions

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-DFVE9c3scWohropP

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot changed the title Update Terraform digitalocean to v2.27.1 Update Terraform digitalocean to v2.28.1 Jun 1, 2023
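For context on what each of these Renovate commits actually touches: the plans above are regenerated after the bot bumps the provider version constraint in the Terraform configuration. As a rough sketch only (the actual file name and constraint operator in this repository may differ), the change amounts to updating the `required_providers` pin:

```terraform
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      # Renovate rewrites this constraint on each release;
      # v2.44.1 is the target version in this PR's title.
      version = "2.44.1"
    }
  }
}
```

Each force-push below corresponds to Renovate rebasing this pin onto a newer release, which is why the streamed plan output is re-posted after every title change.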
@github-actions

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

terraform
Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-wi8bGLz2mYdVymw1

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade         = true
      + cluster_subnet       = (known after apply)
      + created_at           = (known after apply)
      + endpoint             = (known after apply)
      + ha                   = false
      + id                   = (known after apply)
      + ipv4_address         = (known after apply)
      + kube_config          = (sensitive value)
      + name                 = (known after apply)
      + region               = "nyc3"
      + registry_integration = false
      + service_subnet       = (known after apply)
      + status               = (known after apply)
      + surge_upgrade        = true
      + updated_at           = (known after apply)
      + urn                  = (known after apply)
      + version              = "1.27.2-do.0"
      + vpc_uuid             = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.12.2"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.20.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.7.0"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot changed the title Update Terraform digitalocean to v2.28.1 Update Terraform digitalocean to v2.29.0 Jul 17, 2023
@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from 86b5f2c to 5c0f248 Compare July 17, 2023 20:40
@github-actions

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

terraform
Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-CvEk7VeWbnN3usFr

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.27.2-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.12.2"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.20.4"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.7.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

Pusher: @renovate[bot], Action: pull_request
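
For reference, a plan like the one above is typically produced by a resource block along these lines. This is an illustrative sketch only, not the repository's actual code: the `name` expression referencing `random_id.cluster_id.hex` is an assumption inferred from the `random_id` resource in the same plan.

```hcl
# Hypothetical sketch of the configuration behind the plan above.
# Attribute values mirror the plan output; the name expression is assumed.
resource "random_id" "cluster_id" {
  byte_length = 4
}

resource "digitalocean_kubernetes_cluster" "this" {
  name          = "k8s-${random_id.cluster_id.hex}" # assumed; plan shows (known after apply)
  region        = "nyc3"
  version       = "1.27.4-do.0"
  auto_upgrade  = true
  surge_upgrade = true

  maintenance_policy {
    day        = "friday"
    start_time = "03:00"
  }

  node_pool {
    name       = "worker-pool"
    size       = "s-1vcpu-2gb"
    auto_scale = true
    min_nodes  = 1
    max_nodes  = 2
  }
}
```

Because `name` depends on `random_id.cluster_id.hex`, Terraform reports it as `(known after apply)` until the random ID is generated.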

@renovate renovate bot changed the title Update Terraform digitalocean to v2.29.0 Update Terraform digitalocean to v2.30.0 Sep 11, 2023
@github-actions

Terraform Format and Style 🖌

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-P82b6Cs6UtFSzAk5

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.27.4-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.12.4"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.25.0"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.7.2"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

Pusher: @renovate[bot], Action: pull_request
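
The `helm_release` entries in the plan above combine inline chart values with `set` overrides. A minimal sketch of what such a resource looks like in HCL, with values mirroring the cert-manager release in the plan; the `values.yaml` file path is an assumption (the plan shows the YAML inline as a heredoc):

```hcl
# Illustrative sketch only; chart version and set blocks mirror the plan
# output above, the values file path is assumed.
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  namespace        = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  version          = "v1.13.2"
  create_namespace = true
  cleanup_on_fail  = true
  force_update     = true
  max_history      = 3

  values = [
    file("${path.module}/values.yaml") # plan shows this YAML inline instead
  ]

  set {
    name  = "installCRDs"
    value = "true"
  }
}
```

Entries in `values` are merged first, then each `set` block overrides the corresponding key, which is why `installCRDs` appears as a separate `set` block in the plan.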

@renovate renovate bot changed the title Update Terraform digitalocean to v2.30.0 Update Terraform digitalocean to v2.31.0 Oct 23, 2023
@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from 3bea26b to b200912 Compare October 23, 2023 21:31
@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from b200912 to c577c26 Compare November 8, 2023 17:15
@renovate renovate bot changed the title Update Terraform digitalocean to v2.31.0 Update Terraform digitalocean to v2.32.0 Nov 8, 2023

github-actions bot commented Nov 8, 2023

Terraform Format and Style 🖌

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-P9pi9oYx53P1f5X6

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.28.2-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.13.2"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.28.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.8.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request
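
The note above about the `-out` option refers to the standard workflow for guaranteeing that `terraform apply` performs exactly the reviewed actions. A minimal local sketch (Terraform Cloud remote runs, as used in this repository, handle plan persistence themselves):

```shell
# Save the plan to a file, review it, then apply exactly that plan.
terraform plan -out=tfplan
terraform show tfplan    # human-readable review of the saved plan
terraform apply tfplan   # applies only the saved actions, no re-plan
```

Without `-out`, a later `apply` re-plans against current state, so the applied actions can differ from what was reviewed.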

@renovate renovate bot changed the title Update Terraform digitalocean to v2.32.0 Update Terraform digitalocean to v2.33.0 Dec 12, 2023
@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from c577c26 to 8478e7a Compare December 12, 2023 15:56

Terraform Format and Style 🖌️

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-rYjWtw59UdF4wB77

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.28.2-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.13.3"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.28.6"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.8.4"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from 8478e7a to d2a44e7 on December 12, 2023 18:28
@renovate renovate bot changed the title Update Terraform digitalocean to v2.33.0 Update Terraform digitalocean to v2.34.0 Dec 12, 2023

Terraform Format and Style 🖌️

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-KvpGwLoien2Hsvtp

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.28.2-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.13.3"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.28.6"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.8.4"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from d2a44e7 to 20f36b3 on December 21, 2023 01:55
@renovate renovate bot changed the title Update Terraform digitalocean to v2.34.0 Update Terraform digitalocean to v2.34.1 Dec 21, 2023

Terraform Format and Style 🖌️

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-ETztPZbUeSvsYRm1

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.28.2-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }
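For reference, a configuration producing the cluster plan block above would look roughly like the sketch below. The variable-free literals (region, Kubernetes version, maintenance window, node pool sizing) come straight from the plan output; the resource naming via `random_id` is an assumption, since the plan only shows `name = (known after apply)`:

```terraform
resource "digitalocean_kubernetes_cluster" "this" {
  # Interpolated name is a guess; the plan shows name = (known after apply)
  name          = "cluster-${random_id.cluster_id.hex}"
  region        = "nyc3"
  version       = "1.28.2-do.0"
  auto_upgrade  = true
  surge_upgrade = true

  maintenance_policy {
    day        = "friday"
    start_time = "03:00"
  }

  node_pool {
    name       = "worker-pool"
    size       = "s-1vcpu-2gb"
    auto_scale = true
    min_nodes  = 1
    max_nodes  = 2
  }
}
```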

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.13.3"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }
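The `set` entries in the plan above correspond to inline chart-value overrides in the `helm_release` configuration, equivalent to `--set` flags on the helm CLI. A hedged sketch under that assumption (omitted arguments such as `values` and lifecycle flags are left out for brevity):

```terraform
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  namespace        = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  version          = "v1.13.3"
  create_namespace = true

  # Equivalent to: helm install ... --set installCRDs=true
  set {
    name  = "installCRDs"
    value = "true"
  }
}
```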

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.28.6"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.8.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from 20f36b3 to cee66c0 Compare March 11, 2024 00:49
@renovate renovate bot changed the title Update Terraform digitalocean to v2.34.1 Update Terraform digitalocean to v2.35.0 Mar 11, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan

terraform
Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-q1tM4o1ybQGuhf2B

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.29.1-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.14.4"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.36.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.10.0"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from cee66c0 to 38dd722 Compare March 12, 2024 15:14
@renovate renovate bot changed the title Update Terraform digitalocean to v2.37.1 Update Terraform digitalocean to v2.38.0 May 3, 2024

github-actions bot commented May 3, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan

terraform
Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-omCeb9sZMe2PZTxf

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.29.1-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + target_load_balancer_ids         = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.14.5"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "7.2.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.10.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request
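These plans run remotely because the workspace is connected to HCP Terraform. A minimal sketch of the `cloud` block that would route runs to app.terraform.io/app/jameswcurtin/do-k8s-cluster (the block layout is an assumption inferred from the run URLs in these logs):

```terraform
terraform {
  cloud {
    organization = "jameswcurtin"
    workspaces {
      name = "do-k8s-cluster"
    }
  }
}
```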

@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from 667b253 to d7b5bec on May 28, 2024 21:16
@renovate renovate bot changed the title Update Terraform digitalocean to v2.38.0 Update Terraform digitalocean to v2.39.0 May 28, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-7s3bsR5Csr87Vr7o

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.30.1-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + target_load_balancer_ids         = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.14.5"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "7.5.2"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.10.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from d7b5bec to 989a8c2 on May 31, 2024 18:57
@renovate renovate bot changed the title Update Terraform digitalocean to v2.39.0 Update Terraform digitalocean to v2.39.1 May 31, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-95KSrwZCCber4FCQ

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.30.1-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + target_load_balancer_ids         = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.14.5"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "7.5.2"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.10.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate bot changed the title from "Update Terraform digitalocean to v2.39.1" to "Update Terraform digitalocean to v2.39.2" on Jun 4, 2024
@renovate bot force-pushed the renovate/digitalocean-2.x branch from 989a8c2 to d2c4a59 on June 4, 2024 15:54

github-actions bot commented Jun 4, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan

Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-ByWXnpkb8B3N9cUG

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.30.1-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + target_load_balancer_ids         = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.14.5"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "7.5.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.10.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate bot force-pushed the renovate/digitalocean-2.x branch from d2c4a59 to 32362df on August 1, 2024 21:13
@renovate bot changed the title from "Update Terraform digitalocean to v2.39.2" to "Update Terraform digitalocean to v2.40.0" on Aug 1, 2024

github-actions bot commented Aug 1, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan

Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-SLee4hqqScpAb34P

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.30.2-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + target_load_balancer_ids         = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + domains (known after apply)

      + firewall (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }

      + glb_settings (known after apply)

      + healthcheck (known after apply)

      + sticky_sessions (known after apply)
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.15.2"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "8.3.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.11.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot changed the title Update Terraform digitalocean to v2.40.0 Update Terraform digitalocean to v2.41.0 Sep 13, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

terraform
Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-WKXSYs1FN61a2BrS

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.31.1-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + target_load_balancer_ids         = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + domains (known after apply)

      + firewall (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }

      + glb_settings (known after apply)

      + healthcheck (known after apply)

      + sticky_sessions (known after apply)
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.15.3"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "8.3.7"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.11.2"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot changed the title Update Terraform digitalocean to v2.41.0 Update Terraform digitalocean to v2.42.0 Sep 25, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

terraform
Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-jMdaGVQd88qLcZJ2

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.31.1-do.0"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + target_load_balancer_ids         = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + domains (known after apply)

      + firewall (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }

      + glb_settings (known after apply)

      + healthcheck (known after apply)

      + sticky_sessions (known after apply)
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.15.3"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "8.3.8"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.11.2"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot changed the title Update Terraform digitalocean to v2.42.0 Update Terraform digitalocean to v2.43.0 Oct 18, 2024
@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from 7b6bdce to 3dc6bcd Compare October 18, 2024 19:19

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-WtYNaHtGf6bqFgVE

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.31.1-do.3"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + target_load_balancer_ids         = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + domains (known after apply)

      + firewall (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }

      + glb_settings (known after apply)

      + healthcheck (known after apply)

      + sticky_sessions (known after apply)
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.16.1"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "8.3.9"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.11.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from 3dc6bcd to d662f0c Compare November 18, 2024 22:34
@renovate renovate bot changed the title Update Terraform digitalocean to v2.43.0 Update Terraform digitalocean to v2.44.0 Nov 18, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-xy4G9MMZmXoc5BG5

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade                     = true
      + cluster_subnet                   = (known after apply)
      + created_at                       = (known after apply)
      + destroy_all_associated_resources = false
      + endpoint                         = (known after apply)
      + ha                               = false
      + id                               = (known after apply)
      + ipv4_address                     = (known after apply)
      + kube_config                      = (sensitive value)
      + name                             = (known after apply)
      + region                           = "nyc3"
      + registry_integration             = false
      + service_subnet                   = (known after apply)
      + status                           = (known after apply)
      + surge_upgrade                    = true
      + updated_at                       = (known after apply)
      + urn                              = (known after apply)
      + version                          = "1.31.1-do.4"
      + vpc_uuid                         = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + target_load_balancer_ids         = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + domains (known after apply)

      + firewall (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }

      + glb_settings (known after apply)

      + healthcheck (known after apply)

      + sticky_sessions (known after apply)
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.16.1"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
            # (1 unchanged attribute hidden)
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "8.5.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.11.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/digitalocean-2.x branch from d662f0c to 418b291 Compare November 22, 2024 20:51
@renovate renovate bot changed the title Update Terraform digitalocean to v2.44.0 Update Terraform digitalocean to v2.44.1 Nov 22, 2024

Terraform Format and Style 🖌

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖failure

Show Plan

Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-gf7w2LUfURNqgYS2

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...
╷
│ Error: Plugin did not respond
│
│   with module.cert_automation.helm_release.cluster_issuer,
│   on modules/cert-automation/main.tf line 35, in resource "helm_release" "cluster_issuer":
│   35: resource "helm_release" "cluster_issuer" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).PlanResourceChange call. The plugin logs may contain
│ more details.
╵
╷
│ Error: Plugin did not respond
│
│   with module.external_dns.helm_release.external_dns,
│   on modules/external-dns/main.tf line 37, in resource "helm_release" "external_dns":
│   37: resource "helm_release" "external_dns" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).PlanResourceChange call. The plugin logs may contain
│ more details.
╵

Stack trace from the terraform-provider-helm_v2.8.0_x5 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x19e942b]

goroutine 51 [running]:
helm.sh/helm/v3/pkg/registry.(*Client).Tags(0x0, {0xc00322b786?, 0xc00070a410?})
	helm.sh/helm/[email protected]/pkg/registry/client.go:602 +0x12b
helm.sh/helm/v3/pkg/downloader.(*ChartDownloader).getOciURI(0xc00070aad0, {0xc00322b780, 0x3b}, {0x0, 0x0}, 0xc003235560)
	helm.sh/helm/[email protected]/pkg/downloader/chart_downloader.go:154 +0x129
helm.sh/helm/v3/pkg/downloader.(*ChartDownloader).ResolveChartVersion(0xc00070aad0, {0xc00322b780, 0x3b}, {0x0, 0x0})
	helm.sh/helm/[email protected]/pkg/downloader/chart_downloader.go:199 +0x12ff
helm.sh/helm/v3/pkg/downloader.(*ChartDownloader).DownloadTo(0xc00070aad0, {0xc00322b780, 0x3b}, {0x0?, 0x0?}, {0xc000065f80, 0x5f})
	helm.sh/helm/[email protected]/pkg/downloader/chart_downloader.go:90 +0x5b
helm.sh/helm/v3/pkg/action.(*ChartPathOptions).LocateChart(0xc00070ae88, {0xc00011c8e0, 0xc}, 0xc0000d87e0)
	helm.sh/helm/[email protected]/pkg/action/install.go:753 +0xdc5
github.com/hashicorp/terraform-provider-helm/helm.getChart({0x2338180?, 0xc0002b3dc0?}, 0x6?, {0xc00011c8e0?, 0xc0005ca640?}, 0x202aea1?)
	github.com/hashicorp/terraform-provider-helm/helm/resource_release.go:1080 +0xf5
github.com/hashicorp/terraform-provider-helm/helm.resourceDiff({0x2353348?, 0xc000af6150?}, 0x2022baf?, {0x1eb08e0?, 0xc0005bb180})
	github.com/hashicorp/terraform-provider-helm/helm/resource_release.go:780 +0x2b6
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.schemaMap.Diff(0xc000207530, {0x2353348, 0xc000af6150}, 0xc000af08f0, 0xc000af7e90, 0x212f918, {0x1eb08e0, 0xc0005bb180}, 0x0)
	github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/schema.go:699 +0x4b4
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).SimpleDiff(0x2354c38?, {0x2353348?, 0xc000af6150?}, 0xc000af08f0, 0x1d36c40?, {0x1eb08e0?, 0xc0005bb180?})
	github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:890 +0x6c
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).PlanResourceChange(0xc000612bb8, {0x2353348?, 0xc000af6030?}, 0xc0001a62d0)
	github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:741 +0x98c
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).PlanResourceChange(0xc00014bf40, {0x2353348?, 0xc000f502a0?}, 0xc0004f60e0)
	github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:783 +0x574
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_PlanResourceChange_Handler({0x1f377c0?, 0xc00014bf40}, {0x2353348, 0xc000f502a0}, 0xc0004f6000, 0x0)
	github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:367 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00066e3c0, {0x2359e40, 0xc000103d40}, 0xc000aea000, 0xc000215950, 0x32e2588, 0x0)
	google.golang.org/[email protected]/server.go:1295 +0xb0b
google.golang.org/grpc.(*Server).handleStream(0xc00066e3c0, {0x2359e40, 0xc000103d40}, 0xc000aea000, 0x0)
	google.golang.org/[email protected]/server.go:1636 +0xa1b
google.golang.org/grpc.(*Server).serveStreams.func1.2()
	google.golang.org/[email protected]/server.go:932 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
	google.golang.org/[email protected]/server.go:930 +0x28a

Error: The terraform-provider-helm_v2.8.0_x5 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Operation failed: failed running terraform plan (exit 1)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @renovate[bot], Action: pull_request
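Editor's note: the panic above is a nil-pointer dereference inside `terraform-provider-helm` v2.8.0's chart-resolution path (`registry.(*Client).Tags` via `ChartDownloader.ResolveChartVersion`), hit during `PlanResourceChange` for the `helm_release` resources. The usual first mitigation for a provider crash like this is to constrain the provider to a release where the bug is fixed. The snippet below is only a sketch of such a pin; the `~> 2.9` constraint is an illustrative assumption and is not taken from this repository's configuration.

```terraform
# Hedged sketch: pin the helm provider to a newer release line.
# The "~> 2.9" constraint is an assumption for illustration only;
# consult the terraform-provider-helm changelog for the release
# that actually fixes this panic before adopting a version.
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.9"
    }
  }
}
```

After changing the constraint, `terraform init -upgrade` is needed to pick up the new provider version before re-running the plan.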
