
Starting with QEMU storage-provisioner sometimes fails to enable #17396

Closed · spowelljr opened this issue Oct 10, 2023 · 2 comments · Fixed by #17402

Labels: addon/storage-provisioner (Issues relating to storage provisioner addon), co/qemu-driver (QEMU related issues), kind/bug (Categorizes issue or PR as related to a bug)


spowelljr commented Oct 10, 2023

Setup:

  • M1 Mac
  • QEMU driver

I was able to replicate this on another M1 Mac; I'll try other configurations to see whether it's general or not. It fails on both the builtin and socket_vmnet networks.

This is flaky for me: sometimes it starts and other times it doesn't.
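
To hit the flake repeatedly, a throwaway loop like the following works (a sketch in Go using os/exec; the minikube and kubectl invocations are the same ones shown below, and nothing here is minikube code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for i := 0; i < 10; i++ {
		// Start from a clean slate each run so the race can reoccur.
		exec.Command("minikube", "delete").Run()
		out, err := exec.Command("minikube", "start",
			"--driver", "qemu", "--network", "socket_vmnet").CombinedOutput()
		if err != nil {
			fmt.Printf("run %d: start failed: %v\n%s\n", i, err, out)
			continue
		}
		// The bug hides here: start reports success, but the
		// storage-provisioner pod may be missing from this listing.
		pods, _ := exec.Command("kubectl", "get", "pods", "-A").CombinedOutput()
		fmt.Printf("run %d:\n%s\n", i, pods)
	}
}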

From the output it looks completely fine:

$ minikube start --driver qemu --network socket_vmnet
😄  minikube v1.31.2 on Darwin 13.6 (arm64)
✨  Using the qemu2 driver based on user configuration
🌐  Automatically selected the socket_vmnet network
👍  Starting control plane node minikube in cluster minikube
🔥  Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ minikube addons list
|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | 3rd party (Ambassador)         |
| auto-pause                  | minikube | disabled     | minikube                       |
| cloud-spanner               | minikube | disabled     | Google                         |
| csi-hostpath-driver         | minikube | disabled     | Kubernetes                     |
| dashboard                   | minikube | disabled     | Kubernetes                     |
| default-storageclass        | minikube | enabled ✅   | Kubernetes                     |
| efk                         | minikube | disabled     | 3rd party (Elastic)            |
| freshpod                    | minikube | disabled     | Google                         |
| gcp-auth                    | minikube | disabled     | Google                         |
| gvisor                      | minikube | disabled     | minikube                       |
| headlamp                    | minikube | disabled     | 3rd party (kinvolk.io)         |
| helm-tiller                 | minikube | disabled     | 3rd party (Helm)               |
| inaccel                     | minikube | disabled     | 3rd party (InAccel             |
|                             |          |              | [[email protected]])            |
| ingress                     | minikube | disabled     | Kubernetes                     |
| ingress-dns                 | minikube | disabled     | minikube                       |
| inspektor-gadget            | minikube | disabled     | 3rd party                      |
|                             |          |              | (inspektor-gadget.io)          |
| istio                       | minikube | disabled     | 3rd party (Istio)              |
| istio-provisioner           | minikube | disabled     | 3rd party (Istio)              |
| kong                        | minikube | disabled     | 3rd party (Kong HQ)            |
| kubeflow                    | minikube | disabled     | 3rd party                      |
| kubevirt                    | minikube | disabled     | 3rd party (KubeVirt)           |
| logviewer                   | minikube | disabled     | 3rd party (unknown)            |
| metallb                     | minikube | disabled     | 3rd party (MetalLB)            |
| metrics-server              | minikube | disabled     | Kubernetes                     |
| nvidia-device-plugin        | minikube | disabled     | 3rd party (NVIDIA)             |
| nvidia-driver-installer     | minikube | disabled     | 3rd party (Nvidia)             |
| nvidia-gpu-device-plugin    | minikube | disabled     | 3rd party (Nvidia)             |
| olm                         | minikube | disabled     | 3rd party (Operator Framework) |
| pod-security-policy         | minikube | disabled     | 3rd party (unknown)            |
| portainer                   | minikube | disabled     | 3rd party (Portainer.io)       |
| registry                    | minikube | disabled     | minikube                       |
| registry-aliases            | minikube | disabled     | 3rd party (unknown)            |
| registry-creds              | minikube | disabled     | 3rd party (UPMC Enterprises)   |
| storage-provisioner         | minikube | enabled ✅   | minikube                       |
| storage-provisioner-gluster | minikube | disabled     | 3rd party (Gluster)            |
| storage-provisioner-rancher | minikube | disabled     | 3rd party (Rancher)            |
| volumesnapshots             | minikube | disabled     | Kubernetes                     |
|-----------------------------|----------|--------------|--------------------------------|

But if you check the pods, the storage-provisioner pod isn't running:

$ kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-5dd5756b68-ztv94           1/1     Running   0          4m53s
kube-system   etcd-minikube                      1/1     Running   0          5m6s
kube-system   kube-apiserver-minikube            1/1     Running   0          5m7s
kube-system   kube-controller-manager-minikube   1/1     Running   0          5m6s
kube-system   kube-proxy-gf8r5                   1/1     Running   0          4m53s
kube-system   kube-scheduler-minikube            1/1     Running   0          5m6s

Looking at the start logs:

I1010 16:32:35.316388   71687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W1010 16:32:35.316720   71687 host.go:54] host status for "minikube" returned error: state: connect: dial unix /Users/powellsteven/.minikube/machines/minikube/monitor: connect: connection refused
W1010 16:32:35.316738   71687 addons.go:277] "minikube" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
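
What those lines show: before enabling addons, minikube asks the driver for the machine's state, and with the QEMU driver that check goes through the QMP monitor's unix socket; when the dial is refused, the host is reported as not running and enablement is skipped even though the VM is actually up. A minimal illustration of that kind of probe (the socket path here is hypothetical; the real one is the ~/.minikube/machines/minikube/monitor path from the log):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the QMP monitor socket the way a status probe would.
	conn, err := net.Dial("unix", "/tmp/minikube-monitor.sock")
	if err != nil {
		// This is the "connect: connection refused" from the log above;
		// the caller can't tell "not running" from "not accepting yet".
		fmt.Println("host state unknown:", err)
		return
	}
	defer conn.Close()
	fmt.Println("monitor reachable")
}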
spowelljr added the kind/bug, co/qemu-driver, and addon/storage-provisioner labels on Oct 10, 2023
spowelljr commented:

I tried 10 times on a non-M1 Mac and I'm not running into the issue.


spowelljr commented Oct 11, 2023

I tried adding a retry with a 1-second timeout to QEMU's d.RunQMPCommand("query-status") command. I tested it multiple times, and if it fails the first time, it always works on the second attempt; it seems the monitor just might not be accepting connections for a second, and a retry resolves it. I noticed that QEMU starts much faster on M1 than on an Intel Mac (34 seconds vs 1 min 26 seconds), so I'm thinking that might be causing the difference.
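
A sketch of that retry shape as standalone Go (withOneRetry and the simulated query are illustrative; this is not the actual driver code or the fix that landed in #17402):

package main

import (
	"fmt"
	"time"
)

// withOneRetry calls fn and, if it fails, waits `delay` and tries exactly
// once more, enough to ride out the monitor briefly refusing connections.
func withOneRetry(fn func() (string, error), delay time.Duration) (string, error) {
	out, err := fn()
	if err == nil {
		return out, nil
	}
	time.Sleep(delay)
	return fn()
}

func main() {
	attempts := 0
	// Simulated query-status call that refuses its first connection,
	// like the flaky monitor socket described above.
	queryStatus := func() (string, error) {
		attempts++
		if attempts == 1 {
			return "", fmt.Errorf("connect: connection refused")
		}
		return `{"status": "running"}`, nil
	}

	out, err := withOneRetry(queryStatus, 1*time.Second)
	fmt.Println(out, err)
}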
