
Update actions/github-script action to v7 #59

Open · wants to merge 1 commit into base: main

Conversation

@renovate renovate bot commented Nov 13, 2023

This PR contains the following updates:

| Package | Type | Update | Change |
| --- | --- | --- | --- |
| actions/github-script | action | major | v6 -> v7 |

Release Notes

actions/github-script (actions/github-script)

v7

Compare Source
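The change itself is a one-line version bump in whichever workflow file uses the action (the affected workflow is not shown in this PR body). As a rough sketch of what the bumped step might look like, assuming a typical github-script usage, note that the v7 major release moves the action to the Node 20 runtime:

```yaml
# Hypothetical workflow excerpt; the actual workflow file and step in this
# repository are not shown here.
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7   # was @v6 before this PR
        with:
          script: |
            // Example use of the octokit client exposed by github-script
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: "Terraform plan posted above.",
            })
```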


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever the PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


Terraform Format and Style 🖌

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/jameswcurtin/do-k8s-cluster/runs/run-oFn4ACfiKHTWsspy

Waiting for the plan to start...

Terraform v1.1.8
on linux_amd64
Initializing plugins and modules...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.this will be created
  + resource "digitalocean_kubernetes_cluster" "this" {
      + auto_upgrade   = true
      + cluster_subnet = (known after apply)
      + created_at     = (known after apply)
      + endpoint       = (known after apply)
      + ha             = false
      + id             = (known after apply)
      + ipv4_address   = (known after apply)
      + kube_config    = (sensitive value)
      + name           = (known after apply)
      + region         = "nyc3"
      + service_subnet = (known after apply)
      + status         = (known after apply)
      + surge_upgrade  = true
      + updated_at     = (known after apply)
      + urn            = (known after apply)
      + version        = "1.28.2-do.0"
      + vpc_uuid       = (known after apply)

      + maintenance_policy {
          + day        = "friday"
          + duration   = (known after apply)
          + start_time = "03:00"
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = true
          + id                = (known after apply)
          + max_nodes         = 2
          + min_nodes         = 1
          + name              = "worker-pool"
          + nodes             = (known after apply)
          + size              = "s-1vcpu-2gb"
        }
    }

  # digitalocean_loadbalancer.this will be created
  + resource "digitalocean_loadbalancer" "this" {
      + algorithm                        = "round_robin"
      + disable_lets_encrypt_dns_records = false
      + droplet_ids                      = (known after apply)
      + enable_backend_keepalive         = false
      + enable_proxy_protocol            = true
      + http_idle_timeout_seconds        = (known after apply)
      + id                               = (known after apply)
      + ip                               = (known after apply)
      + name                             = (known after apply)
      + project_id                       = (known after apply)
      + redirect_http_to_https           = false
      + region                           = "nyc3"
      + size_unit                        = (known after apply)
      + status                           = (known after apply)
      + urn                              = (known after apply)
      + vpc_uuid                         = (known after apply)

      + forwarding_rule {
          + certificate_id   = (known after apply)
          + certificate_name = (known after apply)
          + entry_port       = 80
          + entry_protocol   = "http"
          + target_port      = 80
          + target_protocol  = "http"
          + tls_passthrough  = false
        }
    }

  # digitalocean_record.loadbalancer_subdomain will be created
  + resource "digitalocean_record" "loadbalancer_subdomain" {
      + domain = (sensitive value)
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "kube"
      + ttl    = 60
      + type   = "A"
      + value  = (known after apply)
    }

  # module.cert_automation.helm_release.cert_manager will be created
  + resource "helm_release" "cert_manager" {
      + atomic                     = false
      + chart                      = "cert-manager"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cert-manager"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.jetstack.io"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://artifacthub.io/packages/helm/cert-manager/cert-manager
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                cainjector:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                startupapicheck:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
                webhook:
                  resources:
                    requests:
                      cpu: 10m
                      memory: 32Mi
            EOT,
        ]
      + verify                     = false
      + version                    = "v1.13.2"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "createCustomResource"
          + value = "true"
        }
      + set {
          + name  = "installCRDs"
          + value = "true"
        }
    }

  # module.cert_automation.helm_release.cluster_issuer will be created
  + resource "helm_release" "cluster_issuer" {
      + atomic                     = false
      + chart                      = "modules/cert-automation/charts/cert-automation"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "cluster-issuer"
      + namespace                  = "cert-manager"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.helm_release.external_dns will be created
  + resource "helm_release" "external_dns" {
      + atomic                     = false
      + chart                      = "external-dns"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "external-dns"
      + namespace                  = "external-dns"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/bitnami/charts/tree/master/bitnami/external-dns
                digitalocean:
                  secretName: "digital-ocean-token"
                interval: "15s"
                provider: "digitalocean"
                policy: "sync"
                txtPrefix: "xdns-"
                resources:
                  requests:
                    memory: "64Mi"
                    cpu: "100m"
            EOT,
        ]
      + verify                     = false
      + version                    = "6.28.4"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.external_dns.kubernetes_namespace.external_dns will be created
  + resource "kubernetes_namespace" "external_dns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.external_dns.kubernetes_secret.digital_ocean_token will be created
  + resource "kubernetes_secret" "digital_ocean_token" {
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "sensitive" = "true"
            }
          + name             = "digital-ocean-token"
          + namespace        = "external-dns"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.ingress_controller.helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                # See https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
                controller:
                  config:
                    use-proxy-protocol: true
                  service:
                    annotations:
                      "service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol": "true"
                    externalTrafficPolicy: "Cluster"
                    type: "LoadBalancer"
            EOT,
        ]
      + verify                     = false
      + version                    = "4.8.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # module.ntfy.helm_release.nfty will be created
  + resource "helm_release" "nfty" {
      + atomic                     = false
      + chart                      = "modules/ntfy/charts/ntfy"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = true
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 3
      + metadata                   = (known after apply)
      + name                       = "ntfy"
      + namespace                  = "ntfy"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.0.1"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # random_id.cluster_id will be created
  + resource "random_id" "cluster_id" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 4
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 11 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_name = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
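For reference, a plan can be pinned so that apply executes exactly what was reviewed by saving it with -out. A hypothetical pair of workflow steps illustrating this, assuming local execution rather than the Terraform Cloud remote run shown above (these steps are not part of this repository's pipeline):

```yaml
# Illustrative only: save the plan to a file, then apply that exact plan.
- name: Terraform Plan
  run: terraform plan -out=tfplan

- name: Terraform Apply
  run: terraform apply tfplan
```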

Pusher: @renovate[bot], Action: pull_request
