
Exiting due to MK_ADDON_ENABLE: enable failed #17417

Closed
ananthforu opened this issue Oct 13, 2023 · 9 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@ananthforu

What Happened?

Getting the error below when trying to execute "minikube addons enable ingress". Before this, the command "kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml" executed correctly.

~ % minikube addons enable ingress
💡 ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
💡 After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
▪ Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
🔎 Verifying ingress addon...

❌ Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]

MacBook Air with M2 chip, running macOS Sonoma 14.0.
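For reference, the "context deadline exceeded" error above means the addon gave up waiting for the ingress-nginx pods to become Ready. A minimal way to see why, using standard kubectl commands and the app.kubernetes.io/name=ingress-nginx label from the error message (pod names will differ per cluster), is:

```
# List the addon's pods and the most recent events in the namespace
kubectl get pods -n ingress-nginx
kubectl get events -n ingress-nginx --sort-by=.metadata.creationTimestamp

# Describe the pods to see scheduling, image-pull, and volume-mount failures
kubectl describe pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx

# Controller logs, if the container managed to start
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=100
```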

Attach the log file

logs.txt

Operating System

macOS (Default)

Driver

Docker

@StLeoX

StLeoX commented Oct 15, 2023

This might be caused by ImagePullBackOff, but even after I changed the insecure-registry config the problem still exists.
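If the pods really are stuck in ImagePullBackOff, one way to take the registry out of the equation is to pull the addon images into the minikube node by hand and then retry. This is only a sketch: the image tags are the ones printed in the output above, and `minikube image pull` is available in recent minikube releases.

```
# Confirm the failure mode first
kubectl get pods -n ingress-nginx

# Pre-pull the images the addon reported it would use
minikube image pull registry.k8s.io/ingress-nginx/controller:v1.8.1
minikube image pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407

# Then retry enabling the addon
minikube addons enable ingress
```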

@PasaOpasen

I have the same problem on Fedora 36.

Oct 26 18:34:13 minikube kubelet[2386]: E1026 18:34:13.598495    2386 secret.go:194] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Oct 26 18:34:13 minikube kubelet[2386]: E1026 18:34:13.598592    2386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f6f0305-1fbe-4e86-a92f-85d30036bd9a-webhook-cert podName:6f6f0305-1fbe-4e86-a92f-85d30036bd9a nodeName:}" failed. No retries permitted until 2023-10-26 18:36:15.598565747 +0000 UTC m=+385.385750380 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/6f6f0305-1fbe-4e86-a92f-85d30036bd9a-webhook-cert") pod "ingress-nginx-controller-6cc5ccb977-hvbln" (UID: "6f6f0305-1fbe-4e86-a92f-85d30036bd9a") : secret "ingress-nginx-admission" not found
Oct 26 18:34:24 minikube kubelet[2386]: E1026 18:34:24.282691    2386 kubelet.go:1821] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert kube-api-access-rpgwp]: timed out waiting for the condition" pod="ingress-nginx/ingress-nginx-controller-6cc5ccb977-hvbln"
Oct 26 18:34:24 minikube kubelet[2386]: E1026 18:34:24.282748    2386 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert kube-api-access-rpgwp]: timed out waiting for the condition" pod="ingress-nginx/ingress-nginx-controller-6cc5ccb977-hvbln" podUID=6f6f0305-1fbe-4e86-a92f-85d30036bd9a
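For context, the webhook-cert volume in those kubelet errors is backed by the ingress-nginx-admission secret, which is normally created by the addon's certgen jobs. A quick way to check whether those jobs ran and why they might have failed (the job names below are the ones the upstream ingress-nginx manifests use) is:

```
# The admission jobs should have completed and created the secret
kubectl get jobs -n ingress-nginx
kubectl get secret ingress-nginx-admission -n ingress-nginx

# If the jobs failed or never ran, their logs usually say why
kubectl logs -n ingress-nginx job/ingress-nginx-admission-create
kubectl logs -n ingress-nginx job/ingress-nginx-admission-patch
```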

@adolphTech

I had the same issue but on Ubuntu 22.

minikube addons enable ingress
💡 ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
🔎 Verifying ingress addon...

❌ Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]


This is how I fixed it:

  1. minikube delete --all
  2. minikube start --driver=docker --vm=true

Here was the response:

😄 minikube v1.32.0 on Ubuntu 22.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
❗ minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.42, but successfully downloaded docker.io/kicbase/stable:v0.0.42 as a fallback image
🤷 docker "minikube" container is missing, will recreate.
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
🐳 Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
  ▪ Generating certificates and keys ...
  ▪ Booting up control plane ...
  ▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
  ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

  3. Then run: minikube addons enable ingress
  4. Here was the response:

💡 ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
  ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
  ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
  ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
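Put together, the recovery sequence from this comment is just a cluster reset followed by re-enabling the addon; the flags are exactly the ones used above.

```
# Reset the cluster, then re-enable the addon
minikube delete --all
minikube start --driver=docker --vm=true
minikube addons enable ingress
```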

@ananthforu
Author

Apologies for the delayed reply. On my Mac M2 the same steps did not help solve this issue. I have noticed many similar issues, but no proper answers yet.

@post-human-world

I came across the same issue and solved it by deleting the ingress namespace, which let me reinstall ingress in a clean environment. By the way, if you run into a similar issue, I think you should open your dashboard and check which resource is going red, so you can figure out what is happening from the more detailed error message.
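A minimal sketch of that cleanup path, assuming the addon installed into the default ingress-nginx namespace:

```
# Disable the addon, remove the leftover namespace, then enable it again
minikube addons disable ingress
kubectl delete namespace ingress-nginx
minikube addons enable ingress

# Optionally open the dashboard to see which resources stay unhealthy
minikube dashboard
```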

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 7, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 6, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Oct 6, 2024