Enabling default storage is giving an error #19448

Open
AbbasRabbani opened this issue Aug 15, 2024 · 3 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@AbbasRabbani

What Happened?

afzal@isg-system-product-name:~$ minikube start
😄 minikube v1.33.1 on Ubuntu 22.04
✨ Automatically selected the qemu2 driver. Other choices: none, ssh
🌐 Automatically selected the builtin network
❗ You are using the QEMU driver without a dedicated network, which doesn't support minikube service & minikube tunnel commands.
💿 Downloading VM boot image ...
> minikube-v1.33.1-amd64.iso....: 65 B / 65 B [---------] 100.00% ? p/s 0s
> minikube-v1.33.1-amd64.iso: 314.16 MiB / 314.16 MiB 100.00% 10.90 MiB p
๐Ÿ‘ Starting "minikube" primary control-plane node in "minikube" cluster
๐Ÿ’พ Downloading Kubernetes v1.30.0 preload ...
> preloaded-images-k8s-v18-v1...: 342.90 MiB / 342.90 MiB 100.00% 10.90 M
๐Ÿ”ฅ Creating qemu2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
๐Ÿณ Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
โ–ช Generating certificates and keys ...
โ–ช Booting up control plane ...
โ–ช Configuring RBAC rules ...
๐Ÿ”— Configuring bridge CNI (Container Networking Interface) ...
๐Ÿ”Ž Verifying Kubernetes components...
โ–ช Using image gcr.io/k8s-minikube/storage-provisioner:v5
E0815 16:15:54.065353 879364 start.go:159] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://localhost:37911/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 127.0.0.1:37911: connect: connection refused
โ— Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://localhost:37911/apis/storage.k8s.io/v1/storageclasses": dial tcp 127.0.0.1:37911: connect: connection refused]
🌟 Enabled addons: storage-provisioner

โŒ Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded

╭──────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                            │
│    😿  If the above advice does not help, please let us know:                              │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                            │
│                                                                                            │
│    Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.       │
│                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────╯

Attach the log file

coredns-7db6d8ff4d-89bjk"
Aug 15 14:16:07 minikube kubelet[1926]: I0815 14:16:07.854767 1926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-7j7fb" (UniqueName: "kubernetes.io/projected/d8cd6cf4-ea12-4764-9f75-d2f03c00eb3f-kube-api-access-7j7fb") pod "coredns-7db6d8ff4d-89bjk" (UID: "d8cd6cf4-ea12-4764-9f75-d2f03c00eb3f") " pod="kube-system/coredns-7db6d8ff4d-89bjk"
Aug 15 14:16:07 minikube kubelet[1926]: I0815 14:16:07.854814 1926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ef7c3de0-9b8c-4f0d-92f2-524d8e7111aa-config-volume") pod "coredns-7db6d8ff4d-68xwg" (UID: "ef7c3de0-9b8c-4f0d-92f2-524d8e7111aa") " pod="kube-system/coredns-7db6d8ff4d-68xwg"
Aug 15 14:16:07 minikube kubelet[1926]: I0815 14:16:07.854836 1926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-jrpwb" (UniqueName: "kubernetes.io/projected/ef7c3de0-9b8c-4f0d-92f2-524d8e7111aa-kube-api-access-jrpwb") pod "coredns-7db6d8ff4d-68xwg" (UID: "ef7c3de0-9b8c-4f0d-92f2-524d8e7111aa") " pod="kube-system/coredns-7db6d8ff4d-68xwg"
Aug 15 14:16:07 minikube kubelet[1926]: I0815 14:16:07.854847 1926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d8cd6cf4-ea12-4764-9f75-d2f03c00eb3f-config-volume") pod "coredns-7db6d8ff4d-89bjk" (UID: "d8cd6cf4-ea12-4764-9f75-d2f03c00eb3f") " pod="kube-system/coredns-7db6d8ff4d-89bjk"
Aug 15 14:16:07 minikube kubelet[1926]: I0815 14:16:07.983083 1926 scope.go:117] "RemoveContainer" containerID="e0ce59a867b3dd263a17835bf82e27ff35c0e3ac6a23aff4a99ab2507c3df275"
Aug 15 14:16:08 minikube kubelet[1926]: I0815 14:16:08.989226 1926 scope.go:117] "RemoveContainer" containerID="e0ce59a867b3dd263a17835bf82e27ff35c0e3ac6a23aff4a99ab2507c3df275"
Aug 15 14:16:08 minikube kubelet[1926]: I0815 14:16:08.989409 1926 scope.go:117] "RemoveContainer" containerID="6725014e3e483a980fdba2b798dd918af04110f40d4dcbd8ffae06b6d58b081f"
Aug 15 14:16:08 minikube kubelet[1926]: E0815 14:16:08.989517 1926 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(48a786bf-bffe-46b1-803b-72ab3579e9b1)"" pod="kube-system/storage-provisioner" podUID="48a786bf-bffe-46b1-803b-72ab3579e9b1"
Aug 15 14:16:09 minikube kubelet[1926]: I0815 14:16:09.006361 1926 scope.go:117] "RemoveContainer" containerID="e0ce59a867b3dd263a17835bf82e27ff35c0e3ac6a23aff4a99ab2507c3df275"
Aug 15 14:16:09 minikube kubelet[1926]: E0815 14:16:09.008589 1926 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e0ce59a867b3dd263a17835bf82e27ff35c0e3ac6a23aff4a99ab2507c3df275" containerID="e0ce59a867b3dd263a17835bf82e27ff35c0e3ac6a23aff4a99ab2507c3df275"
Aug 15 14:16:09 minikube kubelet[1926]: I0815 14:16:09.008606 1926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e0ce59a867b3dd263a17835bf82e27ff35c0e3ac6a23aff4a99ab2507c3df275"} err="failed to get container status "e0ce59a867b3dd263a17835bf82e27ff35c0e3ac6a23aff4a99ab2507c3df275": rpc error: code = Unknown desc = Error response from daemon: No such container: e0ce59a867b3dd263a17835bf82e27ff35c0e3ac6a23aff4a99ab2507c3df275"
Aug 15 14:16:09 minikube kubelet[1926]: I0815 14:16:09.028153 1926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-68xwg" podStartSLOduration=2.028144725 podStartE2EDuration="2.028144725s" podCreationTimestamp="2024-08-15 14:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-15 14:16:09.027708707 +0000 UTC m=+16.213613175" watchObservedRunningTime="2024-08-15 14:16:09.028144725 +0000 UTC m=+16.214049193"
Aug 15 14:16:09 minikube kubelet[1926]: I0815 14:16:09.028195 1926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-89bjk" podStartSLOduration=2.028193157 podStartE2EDuration="2.028193157s" podCreationTimestamp="2024-08-15 14:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-15 14:16:09.013910942 +0000 UTC m=+16.199815419" watchObservedRunningTime="2024-08-15 14:16:09.028193157 +0000 UTC m=+16.214097625"
Aug 15 14:16:09 minikube kubelet[1926]: I0815 14:16:09.038745 1926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xvx9h" podStartSLOduration=2.038738683 podStartE2EDuration="2.038738683s" podCreationTimestamp="2024-08-15 14:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-15 14:16:09.038572888 +0000 UTC m=+16.224477366" watchObservedRunningTime="2024-08-15 14:16:09.038738683 +0000 UTC m=+16.224643151"
Aug 15 14:16:13 minikube kubelet[1926]: I0815 14:16:13.404547 1926 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Aug 15 14:16:13 minikube kubelet[1926]: I0815 14:16:13.404881 1926 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Aug 15 14:16:24 minikube kubelet[1926]: I0815 14:16:24.888040 1926 scope.go:117] "RemoveContainer" containerID="6725014e3e483a980fdba2b798dd918af04110f40d4dcbd8ffae06b6d58b081f"
Aug 15 14:16:52 minikube kubelet[1926]: E0815 14:16:52.892987 1926 iptables.go:577] "Could not set up iptables canary" err=<
Aug 15 14:16:52 minikube kubelet[1926]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 15 14:16:52 minikube kubelet[1926]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 15 14:16:52 minikube kubelet[1926]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 15 14:16:52 minikube kubelet[1926]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 15 14:17:52 minikube kubelet[1926]: E0815 14:17:52.893894 1926 iptables.go:577] "Could not set up iptables canary" err=<
Aug 15 14:17:52 minikube kubelet[1926]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 15 14:17:52 minikube kubelet[1926]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 15 14:17:52 minikube kubelet[1926]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 15 14:17:52 minikube kubelet[1926]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 15 14:18:52 minikube kubelet[1926]: E0815 14:18:52.892746 1926 iptables.go:577] "Could not set up iptables canary" err=<
Aug 15 14:18:52 minikube kubelet[1926]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 15 14:18:52 minikube kubelet[1926]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 15 14:18:52 minikube kubelet[1926]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 15 14:18:52 minikube kubelet[1926]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 15 14:19:52 minikube kubelet[1926]: E0815 14:19:52.893658 1926 iptables.go:577] "Could not set up iptables canary" err=<
Aug 15 14:19:52 minikube kubelet[1926]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 15 14:19:52 minikube kubelet[1926]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 15 14:19:52 minikube kubelet[1926]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 15 14:19:52 minikube kubelet[1926]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 15 14:20:52 minikube kubelet[1926]: E0815 14:20:52.892812 1926 iptables.go:577] "Could not set up iptables canary" err=<
Aug 15 14:20:52 minikube kubelet[1926]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 15 14:20:52 minikube kubelet[1926]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 15 14:20:52 minikube kubelet[1926]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 15 14:20:52 minikube kubelet[1926]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 15 14:21:52 minikube kubelet[1926]: E0815 14:21:52.892836 1926 iptables.go:577] "Could not set up iptables canary" err=<
Aug 15 14:21:52 minikube kubelet[1926]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 15 14:21:52 minikube kubelet[1926]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 15 14:21:52 minikube kubelet[1926]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 15 14:21:52 minikube kubelet[1926]: > table="nat" chain="KUBE-KUBELET-CANARY"

Operating System

Ubuntu

Driver

Docker

@asusikai

Hello. This might not be exactly the same issue, but I wanted to share a potential solution I found that could help in your situation.
The error can arise when using the default Persistent Volume (PV) that Minikube creates.

I was trying to use Jenkins with Helm, but the pod kept entering a CrashLoopBackOff state. The issue was resolved by changing the permissions of the directory /tmp/hostpath-provisioner/default/my-jenkins on the pod from 755 to 777.
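
As a rough sketch of that workaround (the path and the my-jenkins name come from my setup and are only illustrative; the directory the hostpath provisioner creates will differ per release and namespace), the permission change can be made from inside the minikube node, since the hostpath provisioner typically stores volume data on the node filesystem:

# Illustrative commands only; adjust the path to whatever directory the
# hostpath provisioner actually created for your PersistentVolume.
minikube ssh                                                     # open a shell on the minikube node
sudo chmod -R 777 /tmp/hostpath-provisioner/default/my-jenkins   # relax permissions from 755 to 777
exit                                                             # leave the node
kubectl get pods -w                                              # watch whether the pod leaves CrashLoopBackOff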

Good luck!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 17, 2024