Multi node demo fails #16668

Closed
tholewebgods opened this issue Jun 10, 2023 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@tholewebgods

What Happened?

When trying to start the multi-node (multinode) demo, I run into a timeout error.

Guide: https://minikube.sigs.k8s.io/docs/tutorials/multi_node/
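
For reference, the multi-node start from that guide is roughly what I ran (the profile name matches the ClusterName in the log below; my driver is qemu2):

```shell
# Two-node cluster as described in the tutorial; it fails at the step where m02 joins the cluster
minikube start --nodes 2 -p multinode-demo
```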

I'm on the high end of RAM usage, but there are still a couple of GB of RAM free when minikube start is at its peak, shortly before it fails.

A single-node start works without problems.

Please give me some assistance.

Attach the log file

[Install]
config:
{KubernetesVersion:v1.26.3 ClusterName:multinode-demo Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0610 19:24:59.371632 154352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0610 19:24:59.376365 154352 binaries.go:44] Found k8s binaries, skipping transfer
I0610 19:24:59.376395 154352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0610 19:24:59.380984 154352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
I0610 19:24:59.390200 154352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0610 19:24:59.399127 154352 ssh_runner.go:195] Run: grep 10.0.2.15 control-plane.minikube.internal$ /etc/hosts
I0610 19:24:59.401493 154352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0610 19:24:59.407267 154352 host.go:66] Checking if "multinode-demo" exists ...
I0610 19:24:59.407409 154352 config.go:182] Loaded profile config "multinode-demo": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0610 19:24:59.407419 154352 start.go:301] JoinCluster: &{Name:multinode-demo KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.30.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:45767 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:multinode-demo Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:10.0.2.15 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:builtin Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/dev:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0610 19:24:59.407464 154352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm token create --print-join-command --ttl=0"
I0610 19:24:59.407472 154352 sshutil.go:53] new ssh client: &{IP:localhost Port:40665 SSHKeyPath:/home/dev/.minikube/machines/multinode-demo/id_rsa Username:docker}
I0610 19:24:59.530821 154352 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:10.0.2.15 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}
I0610 19:24:59.530864 154352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4x646u.s050ly2ookbl0j7h --discovery-token-ca-cert-hash sha256:a3a9b9e44ba9e070db4b5589c7722d020f854539e70c5e5918f0c9bc1d8a80ca --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-demo-m02"
I0610 19:29:59.777911 154352 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4x646u.s050ly2ookbl0j7h --discovery-token-ca-cert-hash sha256:a3a9b9e44ba9e070db4b5589c7722d020f854539e70c5e5918f0c9bc1d8a80ca --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-demo-m02": (5m0.246963577s)
E0610 19:29:59.778004 154352 start.go:324] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4x646u.s050ly2ookbl0j7h --discovery-token-ca-cert-hash sha256:a3a9b9e44ba9e070db4b5589c7722d020f854539e70c5e5918f0c9bc1d8a80ca --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-demo-m02": Process exited with status 1
stdout:
[preflight] Running pre-flight checks

stderr:
W0610 17:24:59.637700 1216 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: couldn't validate the identity of the API Server: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp 10.0.2.15:8443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
I0610 19:29:59.778078 154352 start.go:327] resetting worker node "m02" before attempting to rejoin cluster...
I0610 19:29:59.778102 154352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --force"
I0610 19:29:59.823895 154352 start.go:331] successfully reset worker node "m02"
I0610 19:29:59.823958 154352 start.go:303] JoinCluster complete in 5m0.416539449s
I0610 19:29:59.833679 154352 out.go:177]
W0610 19:29:59.834553 154352 out.go:239] ❌ Exiting due to GUEST_START: failed to start node: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4x646u.s050ly2ookbl0j7h --discovery-token-ca-cert-hash sha256:a3a9b9e44ba9e070db4b5589c7722d020f854539e70c5e5918f0c9bc1d8a80ca --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-demo-m02": Process exited with status 1
stdout:
[preflight] Running pre-flight checks

stderr:
W0610 17:24:59.637700 1216 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: couldn't validate the identity of the API Server: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp 10.0.2.15:8443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher

W0610 19:29:59.834575 154352 out.go:239]
W0610 19:29:59.835399 154352 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                       │
│    😿  If the above advice does not help, please let us know:                         │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                       │
│                                                                                       │
│    Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.  │
│                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────╯
I0610 19:29:59.836444 154352 out.go:177]
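
For what it's worth, the failing step is the worker's kubeadm join: both nodes in the JoinCluster config above report the same IP (10.0.2.15), and the join dies in preflight with "connection refused" against control-plane.minikube.internal:8443. A minimal sketch of checks to confirm whether m02 can reach the control plane at all (assuming the profile and node names from the log, and that ping is available in the guest image):

```shell
# Overall health of the profile as minikube sees it (is the apiserver running on the control plane?)
minikube status -p multinode-demo

# From the worker node m02: what does control-plane.minikube.internal resolve to,
# and is that address reachable at all? No reply / connection refused here would
# match the preflight failure in the log above.
minikube ssh -p multinode-demo -n m02 "cat /etc/hosts"
minikube ssh -p multinode-demo -n m02 "ping -c 3 control-plane.minikube.internal"
```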

Operating System

None

Driver

None

@tholewebgods
Author

The log file is truncated because the comment form gave the error "comment limited to 65535 characters".

@tholewebgods
Author

minikube-logs.log

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jan 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Feb 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Mar 22, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
