Always need to delete minikube - context deadline exceeded #18214

Closed
mstimvol opened this issue Feb 20, 2024 · 4 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mstimvol

What Happened?

I'm using minikube on Windows with Hyper-V because I had too much trouble with VirtualBox. I recently updated my minikube installation, and now when I run minikube start it always says:

! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
  - Generating certificates and keys ...
  - Booting up control plane ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.28.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

When I delete the cluster and start it again, everything works well until I restart Windows. When I then run minikube start to restore the cluster, it fails with the message above. I then have to run minikube delete and minikube start once more, and everything works fine again.
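
For reference, the exact sequence I have to run after every Windows restart is:

    minikube delete
    minikube start

I have not yet checked whether shutting the node down cleanly before rebooting (minikube stop) or giving the apiserver more time to come up (e.g. minikube start --wait-timeout=10m) makes a difference; I mention these only as things I could try, not as confirmed workarounds.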

When I run minikube start --alsologtostderr, I see the following errors:

I0220 08:12:46.889095   26876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0220 08:12:46.894928   26876 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I0220 08:12:46.895972   26876 kubeadm.go:636] restartCluster start
I0220 08:12:46.902438   26876 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0220 08:12:46.908716   26876 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I0220 08:12:46.909754   26876 kubeconfig.go:92] found "minikube" server: "https://172.23.246.2:8443"
I0220 08:12:46.915721   26876 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0220 08:12:46.921911   26876 api_server.go:166] Checking apiserver status ...
I0220 08:12:46.925158   26876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0220 08:12:46.932098   26876 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0220 08:12:46.932098   26876 api_server.go:166] Checking apiserver status ...
I0220 08:12:46.936403   26876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0220 08:12:46.944164   26876 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0220 08:12:47.444889   26876 api_server.go:166] Checking apiserver status ...
I0220 08:12:47.448186   26876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0220 08:12:47.455277   26876 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0220 08:12:47.946520   26876 api_server.go:166] Checking apiserver status ...
I0220 08:12:47.950248   26876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0220 08:12:47.958503   26876 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

Followed by:

I0220 08:13:03.518881   26876 api_server.go:52] waiting for apiserver process to appear ...
I0220 08:13:03.522040   26876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0220 08:13:03.529612   26876 api_server.go:72] duration metric: took 10.7315ms to wait for apiserver process to appear ...
I0220 08:13:03.529612   26876 api_server.go:88] waiting for apiserver healthz status ...
I0220 08:13:03.530118   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:08.534516   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:08.534516   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:13.537885   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:14.038621   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:19.040168   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:19.040220   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:24.042612   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:24.042612   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:29.044382   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:29.044382   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:34.045620   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:34.045620   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:39.047601   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:39.047713   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:44.049829   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:44.049829   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:49.052201   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:49.052312   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...
I0220 08:13:54.054270   26876 api_server.go:269] stopped: https://172.23.246.2:8443/healthz: Get "https://172.23.246.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0220 08:13:54.054270   26876 api_server.go:253] Checking apiserver healthz at https://172.23.246.2:8443/healthz ...

Any idea how to solve this issue?
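
If it would help with triage, the next time this happens I can capture more detail from inside the node before deleting it. Assuming the standard minikube ISO (where the kubelet runs as a systemd unit) and that crictl is available in the guest, something like:

    minikube logs --file=minikube-logs.txt
    minikube ssh -- sudo journalctl -u kubelet --no-pager -n 100
    minikube ssh -- sudo crictl ps -a

That should show whether kube-apiserver is being started at all or is crash-looping before the healthz checks time out.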

Attach the log file

log.txt

Operating System

Windows

Driver

Hyper-V

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on May 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Jun 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Jul 19, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
