Describe the bug
During a kubespray run, the following task fails intermittently. This could be considered an upstream bug, since kubespray does not modify MaxSessions on the first Kubernetes control plane node... but should it?
Seen while running cluster.yml to add a few additional compute nodes:
TASK [kubernetes/kubeadm : Create kubeadm token for joining nodes with 24h expiration (default)] **********************************************************************************************************************************************
fatal: [compute016 -> kubernetes01]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"kubernetes01\". Make sure this host can be reached over ssh: mux_client_request_session: session request failed: Session open refused by peer\r\nkex_exchange_identification: Connection closed by remote host\r\nConnection closed by 172.24.9.61 port 22\r\n", "unreachable": true}
Observed on the kubernetes01 node in question:
Apr 24 22:55:58 kubernetes01 sshd[1208808]: error: no more sessions
Apr 24 22:55:58 kubernetes01 sshd[1208808]: error: no more sessions
Apr 24 22:55:58 kubernetes01 sshd[1208808]: error: no more sessions
Apr 24 22:55:58 kubernetes01 sshd[1208808]: error: no more sessions
The issue was resolved by increasing MaxSessions and MaxStartups in /etc/ssh/sshd_config on the kubernetes01 node.
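For reference, the server-side change looked roughly like this (the values are illustrative starting points, not tuned recommendations; stock OpenSSH defaults are MaxSessions 10 and MaxStartups 10:30:100):

```
# /etc/ssh/sshd_config on the first control plane node
# MaxSessions caps multiplexed sessions per TCP connection (default 10).
MaxSessions 100
# MaxStartups caps concurrent unauthenticated connections, as start:rate:full
# (default 10:30:100).
MaxStartups 100:30:200
```

Reload sshd afterwards (e.g. systemctl reload sshd) for the change to take effect.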
Related stack-exchange ref: https://unix.stackexchange.com/a/22987
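As to the question above of whether kubespray itself should raise the limit: a minimal sketch of what such a task might look like, assuming a lineinfile approach (the task name, chosen limit, and the Reload sshd handler are all hypothetical, not anything kubespray currently ships):

```yaml
# Hypothetical task; kubespray does not currently manage sshd_config.
- name: Raise sshd session limit on the first control plane node
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?MaxSessions\b'
    line: 'MaxSessions 100'
    validate: 'sshd -t -f %s'
  notify: Reload sshd  # assumed handler that reloads the sshd service
```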
To Reproduce
Steps to reproduce the behavior:
Run kubespray (cluster.yml) against a large group of nodes, some number over 10.
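A plausible client-side mitigation, on the assumption that the failures come from Ansible multiplexing more than 10 concurrent sessions (the OpenSSH default MaxSessions) over one connection to the delegated control plane node, is to cap parallelism:

```ini
# ansible.cfg — keep concurrent forks at or below sshd's default MaxSessions
[defaults]
forks = 10
```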
Expected behavior
No SSH connection failures from cluster member nodes to the Kubernetes control plane node.