Update dependency flake8 to v7 #82

Merged: 1 commit into main on Jan 5, 2024

Conversation

renovate[bot] (Contributor) commented on Jan 5, 2024

Mend Renovate

This PR contains the following updates:

Package: flake8 (changelog)
Change:  ==6.1.0 -> ==7.0.0
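
In practice this is a one-line pin bump. Assuming the pin lives in a requirements-style file (the actual file touched by the commit is not shown on this page), the diff would look roughly like:

    -flake8==6.1.0
    +flake8==7.0.0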

Release Notes

pycqa/flake8 (flake8)

v7.0.0

Compare Source


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.
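
The schedule, automerge, and rebasing behavior listed above comes from the repository's Renovate configuration. A minimal sketch of a renovate.json that would produce "automerge disabled, no schedule restriction" is shown below; it is illustrative only, not this repository's actual file:

    {
      "$schema": "https://docs.renovatebot.com/renovate-schema.json",
      "extends": ["config:base"],
      "automerge": false
    }

Omitting the "schedule" key means Renovate may create and rebase branches at any time, which matches the "At any time (no schedule defined)" notes above.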


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.


github-actions bot commented Jan 5, 2024

Terraform Initialization success

Terraform Plan success

Pusher: renovate[bot], Action: pull_request

Show Plan
data.sakuracloud_archive.ubuntu_archive: Reading...
sakuracloud_ssh_key_gen.gen_key: Refreshing state... [id=113502124990]
sakuracloud_switch.k8s_internal_switch: Refreshing state... [id=113502124986]
sakuracloud_disk.k8s_rook_disk[1]: Refreshing state... [id=113502124983]
sakuracloud_internet.k8s_external_switch: Refreshing state... [id=113502124980]
sakuracloud_disk.k8s_rook_disk[0]: Refreshing state... [id=113502124988]
sakuracloud_disk.k8s_rook_disk[5]: Refreshing state... [id=113502124981]
sakuracloud_disk.k8s_rook_disk[6]: Refreshing state... [id=113502125010]
sakuracloud_disk.k8s_rook_disk[2]: Refreshing state... [id=113502124987]
sakuracloud_disk.k8s_rook_disk[3]: Refreshing state... [id=113502124985]
sakuracloud_disk.k8s_rook_disk[4]: Refreshing state... [id=113502125008]
sakuracloud_disk.k8s_rook_disk[7]: Refreshing state... [id=113502125011]
data.sakuracloud_archive.ubuntu_archive: Read complete after 2s [id=113402076881]
sakuracloud_disk.k8s_router_disk[0]: Refreshing state... [id=113502124982]
sakuracloud_disk.k8s_control_plane_disk[0]: Refreshing state... [id=113502125012]
sakuracloud_disk.k8s_control_plane_disk[2]: Refreshing state... [id=113502124991]
sakuracloud_disk.k8s_control_plane_disk[1]: Refreshing state... [id=113502124989]
sakuracloud_disk.k8s_worker_node_disk[4]: Refreshing state... [id=113502124998]
sakuracloud_disk.k8s_worker_node_disk[3]: Refreshing state... [id=113502125009]
sakuracloud_disk.k8s_worker_node_disk[6]: Refreshing state... [id=113502124997]
sakuracloud_disk.k8s_worker_node_disk[5]: Refreshing state... [id=113502124992]
sakuracloud_disk.k8s_worker_node_disk[0]: Refreshing state... [id=113502124996]
sakuracloud_disk.k8s_worker_node_disk[2]: Refreshing state... [id=113502125007]
sakuracloud_disk.k8s_worker_node_disk[1]: Refreshing state... [id=113502124995]
sakuracloud_disk.k8s_worker_node_disk[7]: Refreshing state... [id=113502124994]
sakuracloud_server.k8s_router[0]: Refreshing state... [id=113502125013]
sakuracloud_server.k8s_worker_node[0]: Refreshing state... [id=113502125033]
sakuracloud_server.k8s_control_plane[1]: Refreshing state... [id=113502125046]
sakuracloud_server.k8s_worker_node[4]: Refreshing state... [id=113502125036]
sakuracloud_server.k8s_worker_node[1]: Refreshing state... [id=113502125031]
sakuracloud_server.k8s_control_plane[0]: Refreshing state... [id=113502125072]
sakuracloud_server.k8s_worker_node[3]: Refreshing state... [id=113502125032]
sakuracloud_server.k8s_worker_node[2]: Refreshing state... [id=113502125028]
sakuracloud_server.k8s_control_plane[2]: Refreshing state... [id=113502125068]
sakuracloud_server.k8s_worker_node[6]: Refreshing state... [id=113502125026]
sakuracloud_server.k8s_worker_node[7]: Refreshing state... [id=113502125029]
sakuracloud_server.k8s_worker_node[5]: Refreshing state... [id=113502125027]
sakuracloud_subnet.bgp_subnet: Refreshing state... [id=15685]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # sakuracloud_disk.k8s_control_plane_disk[0] must be replaced
-/+ resource "sakuracloud_disk" "k8s_control_plane_disk" {
      ~ id                = "113502125012" -> (known after apply)
        name              = "k8s-dev-control-plane-1"
      ~ server_id         = "113502125072" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_control_plane_disk[1] must be replaced
-/+ resource "sakuracloud_disk" "k8s_control_plane_disk" {
      ~ id                = "113502124989" -> (known after apply)
        name              = "k8s-dev-control-plane-2"
      ~ server_id         = "113502125046" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_control_plane_disk[2] must be replaced
-/+ resource "sakuracloud_disk" "k8s_control_plane_disk" {
      ~ id                = "113502124991" -> (known after apply)
        name              = "k8s-dev-control-plane-3"
      ~ server_id         = "113502125068" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_router_disk[0] must be replaced
-/+ resource "sakuracloud_disk" "k8s_router_disk" {
      ~ id                = "113502124982" -> (known after apply)
        name              = "k8s-dev-router-1"
      ~ server_id         = "113502125013" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[0] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                = "113502124996" -> (known after apply)
        name              = "k8s-dev-worker-node-1"
      ~ server_id         = "113502125033" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[1] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                = "113502124995" -> (known after apply)
        name              = "k8s-dev-worker-node-2"
      ~ server_id         = "113502125031" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[2] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                = "113502125007" -> (known after apply)
        name              = "k8s-dev-worker-node-3"
      ~ server_id         = "113502125028" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[3] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                = "113502125009" -> (known after apply)
        name              = "k8s-dev-worker-node-4"
      ~ server_id         = "113502125032" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[4] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                = "113502124998" -> (known after apply)
        name              = "k8s-dev-worker-node-5"
      ~ server_id         = "113502125036" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[5] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                = "113502124992" -> (known after apply)
        name              = "k8s-dev-worker-node-6"
      ~ server_id         = "113502125027" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[6] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                = "113502124997" -> (known after apply)
        name              = "k8s-dev-worker-node-7"
      ~ server_id         = "113502125026" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[7] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                = "113502124994" -> (known after apply)
        name              = "k8s-dev-worker-node-8"
      ~ server_id         = "113502125029" -> (known after apply)
      ~ source_archive_id = "113402076879" -> "113402076881" # forces replacement
        tags              = [
            "dev",
            "k8s",
        ]
      ~ zone              = "tk1b" -> (known after apply)
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_server.k8s_control_plane[0] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_control_plane" {
      ~ disks            = [
          - "113502125012",
        ] -> (known after apply)
        id               = "113502125072"
        name             = "k8s-dev-control-plane-1"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_control_plane[1] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_control_plane" {
      ~ disks            = [
          - "113502124989",
        ] -> (known after apply)
        id               = "113502125046"
        name             = "k8s-dev-control-plane-2"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_control_plane[2] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_control_plane" {
      ~ disks            = [
          - "113502124991",
        ] -> (known after apply)
        id               = "113502125068"
        name             = "k8s-dev-control-plane-3"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_router[0] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_router" {
      ~ disks            = [
          - "113502124982",
        ] -> (known after apply)
        id               = "113502125013"
        name             = "k8s-dev-router-1"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_worker_node[0] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks            = [
          - "113502124996",
          - "113502124988",
        ] -> (known after apply)
        id               = "113502125033"
        name             = "k8s-dev-worker-node-1"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_worker_node[1] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks            = [
          - "113502124995",
          - "113502124983",
        ] -> (known after apply)
        id               = "113502125031"
        name             = "k8s-dev-worker-node-2"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_worker_node[2] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks            = [
          - "113502125007",
          - "113502124987",
        ] -> (known after apply)
        id               = "113502125028"
        name             = "k8s-dev-worker-node-3"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_worker_node[3] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks            = [
          - "113502125009",
          - "113502124985",
        ] -> (known after apply)
        id               = "113502125032"
        name             = "k8s-dev-worker-node-4"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_worker_node[4] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks            = [
          - "113502124998",
          - "113502125008",
        ] -> (known after apply)
        id               = "113502125036"
        name             = "k8s-dev-worker-node-5"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_worker_node[5] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks            = [
          - "113502124992",
          - "113502124981",
        ] -> (known after apply)
        id               = "113502125027"
        name             = "k8s-dev-worker-node-6"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_worker_node[6] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks            = [
          - "113502124997",
          - "113502125010",
        ] -> (known after apply)
        id               = "113502125026"
        name             = "k8s-dev-worker-node-7"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_worker_node[7] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks            = [
          - "113502124994",
          - "113502125011",
        ] -> (known after apply)
        id               = "113502125029"
        name             = "k8s-dev-worker-node-8"
        tags             = [
            "@nic-double-queue",
            "dev",
            "k8s",
        ]
        # (12 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

Plan: 12 to add, 12 to change, 12 to destroy.

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
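
What the plan above shows: the data.sakuracloud_archive.ubuntu_archive lookup now resolves to a newer archive (113402076879 -> 113402076881). Every disk takes its source_archive_id from that data source, and the provider cannot change a disk's source archive in place, so all 12 disks are destroyed and recreated ("# forces replacement") while the servers only receive an in-place update of their disks attribute. A minimal sketch of the Terraform pattern that produces this behavior is shown below; resource and data source names match the plan output, but the argument values are illustrative and not copied from this repository's configuration:

    data "sakuracloud_archive" "ubuntu_archive" {
      # Assumption: the data source selects the latest Ubuntu archive,
      # so its id changes whenever a new image is published.
      os_type = "ubuntu"
    }

    resource "sakuracloud_disk" "k8s_control_plane_disk" {
      count             = 3
      name              = "k8s-dev-control-plane-${count.index + 1}"
      source_archive_id = data.sakuracloud_archive.ubuntu_archive.id # new id -> forces replacement
      tags              = ["dev", "k8s"]
    }

    resource "sakuracloud_server" "k8s_control_plane" {
      count = 3
      name  = "k8s-dev-control-plane-${count.index + 1}"
      disks = [sakuracloud_disk.k8s_control_plane_disk[count.index].id] # updated in place once the new disk exists
      tags  = ["@nic-double-queue", "dev", "k8s"]
    }

As the note above says, because the plan was not saved with `terraform plan -out=<file>`, a later `terraform apply` may act on a fresher refresh rather than exactly these actions; saving the plan and applying it with `terraform apply <file>` pins the actions shown here.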

logica0419 merged commit 1454284 into main on Jan 5, 2024
14 checks passed
logica0419 deleted the renovate/flake8-7.x branch on January 5, 2024 at 07:02