
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition #1438

Closed
hreidar opened this issue Mar 7, 2019 · 24 comments
Labels: area/HA, help wanted

hreidar commented Mar 7, 2019

Hi, I'm trying out kubeadm and I'm following the official docs to set up an HA cluster. I have managed to create an etcd cluster, but the init step on the first master node is failing with the following error:

error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

What keywords did you search in kubeadm issues before filing this one?

error execution phase upload-config/kubelet

and got:

#1382
#1227

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

root@k8s-c3-m1:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:        16.04
Codename:       xenial

root@k8s-c3-m1:~# uname -a
Linux k8s-c3-m1 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

root@k8s-c3-m1:~# docker --version
Docker version 18.06.1-ce, build e68fc7a
root@k8s-c3-m1:~#

root@k8s-c3-m1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:35:32Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

root@k8s-c3-m1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Note: I also tried Docker versions 17.03.3-ce and 18.09.3-ce.

I'm running this on VMs in VMware.

What happened?

root@k8s-c3-m1:~# kubeadm init --config /root/kubeadmcfg.yaml -v 256
I0304 14:52:28.103162    1391 initconfiguration.go:169] loading configuration from the given file
I0304 14:52:28.107089    1391 interface.go:384] Looking for default routes with IPv4 addresses
I0304 14:52:28.107141    1391 interface.go:389] Default route transits interface "eth0"
I0304 14:52:28.107440    1391 interface.go:196] Interface eth0 is up
I0304 14:52:28.107587    1391 interface.go:244] Interface "eth0" has 1 addresses :[10.10.10.93/24].
I0304 14:52:28.107695    1391 interface.go:211] Checking addr  10.10.10.93/24.
I0304 14:52:28.107724    1391 interface.go:218] IP found 10.10.10.93
I0304 14:52:28.107759    1391 interface.go:250] Found valid IPv4 address 10.10.10.93 for interface "eth0".
I0304 14:52:28.107791    1391 interface.go:395] Found active IP 10.10.10.93 
I0304 14:52:28.107979    1391 version.go:163] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable.txt
I0304 14:52:29.493555    1391 feature_gate.go:206] feature gates: &{map[]}
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
I0304 14:52:29.494477    1391 checks.go:572] validating Kubernetes and kubeadm version
I0304 14:52:29.494609    1391 checks.go:171] validating if the firewall is enabled and active
I0304 14:52:29.506263    1391 checks.go:208] validating availability of port 6443
I0304 14:52:29.506767    1391 checks.go:208] validating availability of port 10251
I0304 14:52:29.507110    1391 checks.go:208] validating availability of port 10252
I0304 14:52:29.507454    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0304 14:52:29.507728    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0304 14:52:29.507959    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0304 14:52:29.508140    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0304 14:52:29.508316    1391 checks.go:430] validating if the connectivity type is via proxy or direct
I0304 14:52:29.508504    1391 checks.go:466] validating http connectivity to first IP address in the CIDR
I0304 14:52:29.508798    1391 checks.go:466] validating http connectivity to first IP address in the CIDR
I0304 14:52:29.509053    1391 checks.go:104] validating the container runtime
I0304 14:52:29.749661    1391 checks.go:130] validating if the service is enabled and active
I0304 14:52:29.778962    1391 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0304 14:52:29.779324    1391 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0304 14:52:29.779573    1391 checks.go:644] validating whether swap is enabled or not
I0304 14:52:29.779818    1391 checks.go:373] validating the presence of executable ip
I0304 14:52:29.780044    1391 checks.go:373] validating the presence of executable iptables
I0304 14:52:29.780251    1391 checks.go:373] validating the presence of executable mount
I0304 14:52:29.780465    1391 checks.go:373] validating the presence of executable nsenter
I0304 14:52:29.780674    1391 checks.go:373] validating the presence of executable ebtables
I0304 14:52:29.780925    1391 checks.go:373] validating the presence of executable ethtool
I0304 14:52:29.781018    1391 checks.go:373] validating the presence of executable socat
I0304 14:52:29.781221    1391 checks.go:373] validating the presence of executable tc
I0304 14:52:29.781415    1391 checks.go:373] validating the presence of executable touch
I0304 14:52:29.781647    1391 checks.go:515] running all checks
I0304 14:52:29.838382    1391 checks.go:403] checking whether the given node name is reachable using net.LookupHost
I0304 14:52:29.838876    1391 checks.go:613] validating kubelet version
I0304 14:52:29.983771    1391 checks.go:130] validating if the service is enabled and active
I0304 14:52:30.011507    1391 checks.go:208] validating availability of port 10250
I0304 14:52:30.011951    1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/etcd/ca.crt
I0304 14:52:30.012301    1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/apiserver-etcd-client.crt
I0304 14:52:30.012360    1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/apiserver-etcd-client.key
I0304 14:52:30.012408    1391 checks.go:685] validating the external etcd version
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0304 14:52:30.238175    1391 checks.go:833] image exists: k8s.gcr.io/kube-apiserver:v1.13.4
I0304 14:52:30.378446    1391 checks.go:833] image exists: k8s.gcr.io/kube-controller-manager:v1.13.4
I0304 14:52:30.560185    1391 checks.go:833] image exists: k8s.gcr.io/kube-scheduler:v1.13.4
I0304 14:52:30.745876    1391 checks.go:833] image exists: k8s.gcr.io/kube-proxy:v1.13.4
I0304 14:52:30.930200    1391 checks.go:833] image exists: k8s.gcr.io/pause:3.1
I0304 14:52:31.096902    1391 checks.go:833] image exists: k8s.gcr.io/coredns:1.2.6
I0304 14:52:31.097108    1391 kubelet.go:71] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0304 14:52:31.256217    1391 kubelet.go:89] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0304 14:52:31.530165    1391 certs.go:113] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-c3-m1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.93 10.10.10.76 127.0.0.1 10.10.10.90 10.10.10.91 10.10.10.92 10.10.10.76]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Using existing etcd/ca keyless certificate authority
[certs] External etcd mode: Skipping etcd/server certificate authority generation
[certs] External etcd mode: Skipping etcd/peer certificate authority generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate authority generation
[certs] Using existing apiserver-etcd-client certificate and key on disk
I0304 14:52:33.267470    1391 certs.go:113] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I0304 14:52:33.995630    1391 certs.go:72] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0304 14:52:34.708619    1391 kubeconfig.go:92] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0304 14:52:35.249743    1391 kubeconfig.go:92] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0304 14:52:35.798270    1391 kubeconfig.go:92] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0304 14:52:36.159920    1391 kubeconfig.go:92] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0304 14:52:36.689060    1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.701499    1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0304 14:52:36.701545    1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.703214    1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0304 14:52:36.703259    1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.704327    1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0304 14:52:36.704356    1391 etcd.go:97] [etcd] External etcd mode. Skipping the creation of a manifest for local etcd
I0304 14:52:36.704377    1391 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy
I0304 14:52:36.705892    1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0304 14:52:36.707216    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:36.711008    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 3 milliseconds
I0304 14:52:36.711030    1391 round_trippers.go:444] Response Headers:
I0304 14:52:36.711077    1391 request.go:779] Got a Retry-After 1s response for attempt 1 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:37.711365    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:37.715841    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 4 milliseconds
I0304 14:52:37.715880    1391 round_trippers.go:444] Response Headers:
I0304 14:52:37.715930    1391 request.go:779] Got a Retry-After 1s response for attempt 2 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:38.716182    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:38.717826    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:38.717850    1391 round_trippers.go:444] Response Headers:
I0304 14:52:38.717897    1391 request.go:779] Got a Retry-After 1s response for attempt 3 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:39.718135    1391 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:39.719946    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:39.719972    1391 round_trippers.go:444] Response Headers:
I0304 14:52:39.720022    1391 request.go:779] Got a Retry-After 1s response for attempt 4 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:40.720273    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:40.722069    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:40.722093    1391 round_trippers.go:444] Response Headers:
I0304 14:52:40.722136    1391 request.go:779] Got a Retry-After 1s response for attempt 5 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:41.722440    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:41.724033    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:41.724058    1391 round_trippers.go:444] Response Headers:
I0304 14:52:41.724103    1391 request.go:779] Got a Retry-After 1s response for attempt 6 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:42.724350    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:52.725613    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 10001 milliseconds
I0304 14:52:52.725683    1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.226097    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:53.720051    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 493 milliseconds
I0304 14:52:53.720090    1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.720103    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:53.720115    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:53.720125    1391 round_trippers.go:447]     Content-Length: 879
I0304 14:52:53.720135    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:53 GMT
I0304 14:52:53.720197    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[-]poststarthook/start-kube-apiserver-admission-initializer failed: reason withheld
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[-]autoregister-completion failed: reason withheld
healthz check failed
I0304 14:52:53.726022    1391 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:53.739616    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 13 milliseconds
I0304 14:52:53.739690    1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.739705    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:53.739717    1391 round_trippers.go:447]     Content-Length: 858
I0304 14:52:53.740058    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:53 GMT
I0304 14:52:53.740083    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:53.740342    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[-]poststarthook/start-kube-apiserver-admission-initializer failed: reason withheld
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:54.226068    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:54.232126    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 6 milliseconds
I0304 14:52:54.232149    1391 round_trippers.go:444] Response Headers:
I0304 14:52:54.232161    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:54.232172    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:54.232182    1391 round_trippers.go:447]     Content-Length: 816
I0304 14:52:54.232192    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:54 GMT
I0304 14:52:54.232234    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:54.726154    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:54.734050    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 7 milliseconds
I0304 14:52:54.734091    1391 round_trippers.go:444] Response Headers:
I0304 14:52:54.734111    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:54 GMT
I0304 14:52:54.734129    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:54.734146    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:54.734163    1391 round_trippers.go:447]     Content-Length: 774
I0304 14:52:54.734250    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:55.226158    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:55.231693    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 5 milliseconds
I0304 14:52:55.231734    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.231754    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.231772    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:55.231789    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:55.231805    1391 round_trippers.go:447]     Content-Length: 774
I0304 14:52:55.231998    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:55.726404    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:55.733705    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 200 OK in 7 milliseconds
I0304 14:52:55.733746    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.733766    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:55.733792    1391 round_trippers.go:447]     Content-Length: 2
I0304 14:52:55.733809    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.733888    1391 request.go:942] Response Body: ok
[apiclient] All control plane components are healthy after 19.026898 seconds
I0304 14:52:55.736342    1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0304 14:52:55.738400    1391 uploadconfig.go:114] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0304 14:52:55.741686    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config'
I0304 14:52:55.751480    1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 9 milliseconds
I0304 14:52:55.751978    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.752324    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.752367    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.752586    1391 round_trippers.go:447]     Content-Length: 1423
I0304 14:52:55.752696    1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"519f6c23-3e69-11e9-8dd7-0050569c544c","resourceVersion":"13121","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"ClusterConfiguration":"apiServer:\n  certSANs:\n  - 127.0.0.1\n  - 10.10.10.90\n  - 10.10.10.91\n  - 10.10.10.92\n  - 10.10.10.76\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  external:\n    caFile: /etc/kubernetes/pki/etcd/ca.crt\n    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n    endpoints:\n    - https://10.10.10.90:2379\n    - https://10.10.10.91:2379\n    - https://10.10.10.92:2379\n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  k8s-c3-m1:\n    advertiseAddress: 10.10.10.93\n    bindPort: 6443\n  k8s-c3-m2:\n    advertiseAddress: 10.10.10.94\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.756813    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n  certSANs:\n  - 127.0.0.1\n  - 10.10.10.90\n  - 10.10.10.91\n  - 10.10.10.92\n  - 10.10.10.76\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  external:\n    caFile: /etc/kubernetes/pki/etcd/ca.crt\n    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n    endpoints:\n    - https://10.10.10.90:2379\n    - https://10.10.10.91:2379\n    - https://10.10.10.92:2379\n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  k8s-c3-m1:\n    advertiseAddress: 10.10.10.93\n    bindPort: 6443\n  k8s-c3-m2:\n    advertiseAddress: 10.10.10.94\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.757443    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Content-Type: application/json" -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps'
I0304 14:52:55.913083    1391 round_trippers.go:438] POST https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 155 milliseconds
I0304 14:52:55.913243    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.913271    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.913290    1391 round_trippers.go:447]     Content-Length: 218
I0304 14:52:55.913335    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.913438    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"kubeadm-config\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm-config","kind":"configmaps"},"code":409}
I0304 14:52:55.914863    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n  certSANs:\n  - 127.0.0.1\n  - 10.10.10.90\n  - 10.10.10.91\n  - 10.10.10.92\n  - 10.10.10.76\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  external:\n    caFile: /etc/kubernetes/pki/etcd/ca.crt\n    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n    endpoints:\n    - https://10.10.10.90:2379\n    - https://10.10.10.91:2379\n    - https://10.10.10.92:2379\n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  k8s-c3-m1:\n    advertiseAddress: 10.10.10.93\n    bindPort: 6443\n  k8s-c3-m2:\n    advertiseAddress: 10.10.10.94\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.915123    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config'
I0304 14:52:55.923538    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 8 milliseconds
I0304 14:52:55.924120    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.924437    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.924810    1391 round_trippers.go:447]     Content-Length: 1423
I0304 14:52:55.925107    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.925521    1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"519f6c23-3e69-11e9-8dd7-0050569c544c","resourceVersion":"13121","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"ClusterConfiguration":"apiServer:\n  certSANs:\n  - 127.0.0.1\n  - 10.10.10.90\n  - 10.10.10.91\n  - 10.10.10.92\n  - 10.10.10.76\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  external:\n    caFile: /etc/kubernetes/pki/etcd/ca.crt\n    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n    endpoints:\n    - https://10.10.10.90:2379\n    - https://10.10.10.91:2379\n    - https://10.10.10.92:2379\n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  k8s-c3-m1:\n    advertiseAddress: 10.10.10.93\n    bindPort: 6443\n  k8s-c3-m2:\n    advertiseAddress: 10.10.10.94\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.926346    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.926823    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles'
I0304 14:52:55.946643    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 409 Conflict in 19 milliseconds
I0304 14:52:55.947026    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.947441    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.947798    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.948105    1391 round_trippers.go:447]     Content-Length: 298
I0304 14:52:55.948447    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"roles.rbac.authorization.k8s.io \"kubeadm:nodes-kubeadm-config\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:nodes-kubeadm-config","group":"rbac.authorization.k8s.io","kind":"roles"},"code":409}
I0304 14:52:55.949132    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.949653    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config'
I0304 14:52:55.960370    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config 200 OK in 10 milliseconds
I0304 14:52:55.960920    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.961216    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.961507    1391 round_trippers.go:447]     Content-Length: 464
I0304 14:52:55.961789    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.962002    1391 request.go:942] Response Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm%3Anodes-kubeadm-config","uid":"51a356c9-3e69-11e9-8dd7-0050569c544c","resourceVersion":"559","creationTimestamp":"2019-03-04T10:36:07Z"},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.964418    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:55.965022    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings'
I0304 14:52:55.983782    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 409 Conflict in 18 milliseconds
I0304 14:52:55.983847    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.983890    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.983920    1391 round_trippers.go:447]     Content-Length: 312
I0304 14:52:55.983948    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.984007    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rolebindings.rbac.authorization.k8s.io \"kubeadm:nodes-kubeadm-config\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:nodes-kubeadm-config","group":"rbac.authorization.k8s.io","kind":"rolebindings"},"code":409}
I0304 14:52:55.984330    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:55.984464    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:nodes-kubeadm-config'
I0304 14:52:55.994138    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:nodes-kubeadm-config 200 OK in 9 milliseconds
I0304 14:52:55.994193    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.994497    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.994878    1391 round_trippers.go:447]     Content-Length: 678
I0304 14:52:55.995094    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.995377    1391 request.go:942] Response Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm%3Anodes-kubeadm-config","uid":"51a61bf8-3e69-11e9-8dd7-0050569c544c","resourceVersion":"560","creationTimestamp":"2019-03-04T10:36:07Z"},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:56.001421    1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0304 14:52:56.002891    1391 uploadconfig.go:128] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
I0304 14:52:56.005261    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n  anonymous:\n    enabled: false\n  webhook:\n    cacheTTL: 2m0s\n    enabled: true\n  x509:\n    clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n  mode: Webhook\n  webhook:\n    cacheAuthorizedTTL: 5m0s\n    cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n  imagefs.available: 15%\n  memory.available: 100Mi\n  nodefs.available: 10%\n  nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0304 14:52:56.005580    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps'
I0304 14:52:56.026664    1391 round_trippers.go:438] POST https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 20 milliseconds
I0304 14:52:56.026763    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.026798    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.026852    1391 round_trippers.go:447]     Content-Length: 228
I0304 14:52:56.026931    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.027084    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"kubelet-config-1.13\" already exists","reason":"AlreadyExists","details":{"name":"kubelet-config-1.13","kind":"configmaps"},"code":409}
I0304 14:52:56.027551    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n  anonymous:\n    enabled: false\n  webhook:\n    cacheTTL: 2m0s\n    enabled: true\n  x509:\n    clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n  mode: Webhook\n  webhook:\n    cacheAuthorizedTTL: 5m0s\n    cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n  imagefs.available: 15%\n  memory.available: 100Mi\n  nodefs.available: 10%\n  nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0304 14:52:56.027830    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13'
I0304 14:52:56.036853    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 8 milliseconds
I0304 14:52:56.036900    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.037253    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.037291    1391 round_trippers.go:447]     Content-Length: 2133
I0304 14:52:56.037554    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.037755    1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13","uid":"51a9de57-3e69-11e9-8dd7-0050569c544c","resourceVersion":"561","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n  anonymous:\n    enabled: false\n  webhook:\n    cacheTTL: 2m0s\n    enabled: true\n  x509:\n    clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n  mode: Webhook\n  webhook:\n    cacheAuthorizedTTL: 5m0s\n    cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n  imagefs.available: 15%\n  memory.available: 100Mi\n  nodefs.available: 10%\n  nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0304 14:52:56.038255    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.038523    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles'
I0304 14:52:56.052414    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 409 Conflict in 13 milliseconds
I0304 14:52:56.052512    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.052572    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.052603    1391 round_trippers.go:447]     Content-Length: 296
I0304 14:52:56.052685    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.052955    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"roles.rbac.authorization.k8s.io \"kubeadm:kubelet-config-1.13\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:kubelet-config-1.13","group":"rbac.authorization.k8s.io","kind":"roles"},"code":409}
I0304 14:52:56.053398    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.053646    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:kubelet-config-1.13'
I0304 14:52:56.061599    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:kubelet-config-1.13 200 OK in 7 milliseconds
I0304 14:52:56.061691    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.061723    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.061779    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.061808    1391 round_trippers.go:447]     Content-Length: 467
I0304 14:52:56.061917    1391 request.go:942] Response Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm%3Akubelet-config-1.13","uid":"51abee39-3e69-11e9-8dd7-0050569c544c","resourceVersion":"562","creationTimestamp":"2019-03-04T10:36:07Z"},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.062370    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"},{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.062564    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings'
I0304 14:52:56.076620    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 409 Conflict in 13 milliseconds
I0304 14:52:56.076664    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.076902    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.076938    1391 round_trippers.go:447]     Content-Length: 310
I0304 14:52:56.077092    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.077299    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rolebindings.rbac.authorization.k8s.io \"kubeadm:kubelet-config-1.13\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:kubelet-config-1.13","group":"rbac.authorization.k8s.io","kind":"rolebindings"},"code":409}
I0304 14:52:56.077657    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"},{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.077940    1391 round_trippers.go:419] curl -k -v -XPUT  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" -H "Content-Type: application/json" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:kubelet-config-1.13'
I0304 14:52:56.084893    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:kubelet-config-1.13 200 OK in 6 milliseconds
I0304 14:52:56.084937    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.085395    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.085635    1391 round_trippers.go:447]     Content-Length: 675
I0304 14:52:56.085675    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.086357    1391 request.go:942] Response Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm%3Akubelet-config-1.13","uid":"51ad932c-3e69-11e9-8dd7-0050569c544c","resourceVersion":"563","creationTimestamp":"2019-03-04T10:36:07Z"},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:nodes"},{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.086694    1391 uploadconfig.go:133] [upload-config] Preserving the CRISocket information for the control-plane node
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-c3-m1" as an annotation
[excluded similar lines] ...
I0304 14:53:16.587525    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1'
I0304 14:53:16.597510    1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1 404 Not Found in 9 milliseconds
I0304 14:53:16.597872    1391 round_trippers.go:444] Response Headers:
I0304 14:53:16.597909    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:53:16.598117    1391 round_trippers.go:447]     Content-Length: 188
I0304 14:53:16.598141    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:53:16 GMT
I0304 14:53:16.598332    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-c3-m1\" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
[kubelet-check] Initial timeout of 40s passed.
[excluded similar lines] ...
I0304 14:53:17.111508    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-c3-m1\" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
I0304 14:54:56.095649    1391 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1'
I0304 14:54:56.101815    1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1 404 Not Found in 6 milliseconds
I0304 14:54:56.101895    1391 round_trippers.go:444] Response Headers:
I0304 14:54:56.101926    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:54:56.101945    1391 round_trippers.go:447]     Content-Length: 188
I0304 14:54:56.101996    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:54:56 GMT
I0304 14:54:56.102074    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-c3-m1\" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
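For reference, the repeated 404s above mean the kubelet never registered the Node object that kubeadm is trying to annotate. Once /etc/kubernetes/admin.conf has been written, the same check can be repeated by hand (a rough sketch; adjust the node name to your host):

kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
kubectl --kubeconfig /etc/kubernetes/admin.conf describe node k8s-c3-m1 | grep -i cri-socket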

What you expected to happen?

Master to initialize without issue.

How to reproduce it (as minimally and precisely as possible)?

node preparation

# note! - docker needs to be installed on all nodes (it is on my 16.04 template VMs)

# install misc tools
apt-get update && apt-get install -y apt-transport-https curl

# install required k8s tools
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

# turn off swap
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# create systemd config for kubelet
cat << _EOF_ > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Restart=always
_EOF_

# reload systemd and start kubelet
systemctl daemon-reload
systemctl restart kubelet

config creation for kubelet and etcd

### on all etcd nodes
# create required variables
declare -A ETCDINFO
ETCDINFO=([k8s-c3-e1]=10.10.10.90 [k8s-c3-e2]=10.10.10.91 [k8s-c3-e3]=10.10.10.92)
mapfile -t ETCDNAMES < <(for KEY in ${!ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t ETCDIPS < <(for KEY in ${ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
declare -A MASTERINFO
MASTERINFO=([k8s-c3-m1]=10.10.10.93 [k8s-c3-m2]=10.10.10.94 [k8s-c3-m3]=10.10.10.95)
mapfile -t MASTERNAMES < <(for KEY in ${!MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t MASTERIPS < <(for KEY in ${MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')

# create clusterConfig for etcd
cat << EOF > /root/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${ETCDINFO[$HOSTNAME]}"
        peerCertSANs:
        - "${ETCDINFO[$HOSTNAME]}"
        extraArgs:
            initial-cluster: ${ETCDNAMES[0]}=https://${ETCDIPS[0]}:2380,${ETCDNAMES[1]}=https://${ETCDIPS[1]}:2380,${ETCDNAMES[2]}=https://${ETCDIPS[2]}:2380
            initial-cluster-state: new
            name: ${HOSTNAME}
            listen-peer-urls: https://${ETCDINFO[$HOSTNAME]}:2380
            listen-client-urls: https://${ETCDINFO[$HOSTNAME]}:2379
            advertise-client-urls: https://${ETCDINFO[$HOSTNAME]}:2379
            initial-advertise-peer-urls: https://${ETCDINFO[$HOSTNAME]}:2380
EOF

generate and distribute certs

### run only on one etcd node (k8s-c3-e1)
# generate the main certificate authority (creates two files in /etc/kubernetes/pki/etcd/)
kubeadm init phase certs etcd-ca

# create certificates
kubeadm init phase certs etcd-server --config=/root/kubeadmcfg.yaml 
kubeadm init phase certs etcd-peer --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/root/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/root/kubeadmcfg.yaml

# copy cert files from k8s-c3-e1 to the other etcd nodes
scp -rp /etc/kubernetes/pki ubuntu@${ETCDIPS[1]}: && \
ssh -t ubuntu@${ETCDIPS[1]} "sudo mv pki /etc/kubernetes/ && \
sudo chown -R root.root /etc/kubernetes/pki"

scp -rp /etc/kubernetes/pki ubuntu@${ETCDIPS[2]}: && \
ssh -t ubuntu@${ETCDIPS[2]} "sudo mv pki /etc/kubernetes/ && \
sudo chown -R root.root /etc/kubernetes/pki"

# copy cert files from k8s-c3-e1 to the master nodes
scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[0]}: && \
ssh -t ubuntu@${MASTERIPS[0]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"

scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[1]}: && \
ssh -t ubuntu@${MASTERIPS[1]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"

scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[2]}: && \
ssh -t ubuntu@${MASTERIPS[2]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"

### run on the other etcd nodes
# create certificates
kubeadm init phase certs etcd-server --config=/root/kubeadmcfg.yaml 
kubeadm init phase certs etcd-peer --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/root/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/root/kubeadmcfg.yaml

# create the local etcd static pod manifest
kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml


### run only on one etcd node (k8s-c3-e1)
# check if cluster is running
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${ETCDIPS[0]}:2379 cluster-health

config and init master nodes

### run on all master nodes
# create required variables
declare -A ETCDINFO
ETCDINFO=([k8s-c3-e1]=10.10.10.90 [k8s-c3-e2]=10.10.10.91 [k8s-c3-e3]=10.10.10.92)
mapfile -t ETCDNAMES < <(for KEY in ${!ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t ETCDIPS < <(for KEY in ${ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
declare -A MASTERINFO
MASTERINFO=([k8s-c3-m1]=10.10.10.93 [k8s-c3-m2]=10.10.10.94 [k8s-c3-m3]=10.10.10.95)
mapfile -t MASTERNAMES < <(for KEY in ${!MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t MASTERIPS < <(for KEY in ${MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
VIP=10.10.10.76

# create clusterConfig for master nodes
cat << EOF > /root/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "127.0.0.1"
  - "${ETCDIPS[0]}"
  - "${ETCDIPS[1]}"
  - "${ETCDIPS[2]}"
  - "${VIP}"
controlPlaneEndpoint: "${VIP}:6443"
etcd:
    external:
        endpoints:
        - https://${ETCDIPS[0]}:2379
        - https://${ETCDIPS[1]}:2379
        - https://${ETCDIPS[2]}:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF

### run only on the first master node (k8s-c3-m1)
# init the first master node
service kubelet stop && \
kubeadm init --config /root/kubeadmcfg.yaml

Anything else we need to know?

cluster nodes

k8s-c3-lb - 10.10.10.76
k8s-c3-e1 - 10.10.10.90
k8s-c3-e2 - 10.10.10.91
k8s-c3-e3 - 10.10.10.92
k8s-c3-m1 - 10.10.10.93
k8s-c3-m2 - 10.10.10.94
k8s-c3-m3 - 10.10.10.95
k8s-c3-w1 - 10.10.10.96
k8s-c3-w2 - 10.10.10.97
k8s-c3-w3 - 10.10.10.98

nginx LB config

root@k8s-c3-lb:~# cat nginx.conf
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

error_log /var/log/nginx/error.log info;

stream {
  upstream k8s-c3 {
    server 10.10.10.93:6443;
    server 10.10.10.94:6443;
    server 10.10.10.95:6443;
  }
  server {
    listen 6443;
    proxy_pass k8s-c3;
  }
}

netcat output to check lb

root@k8s-c3-lb:~# nc -v 10.10.10.76 6443
Connection to 10.10.10.76 6443 port [tcp/*] succeeded!

kubeadm config on etcd nodes

root@k8s-c3-e1:~# cat kubeadmcfg.yaml 
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "10.10.10.90"
        peerCertSANs:
        - "10.10.10.90"
        extraArgs:
            initial-cluster: k8s-c3-e1=https://10.10.10.90:2380,k8s-c3-e2=https://10.10.10.91:2380,k8s-c3-e3=https://10.10.10.92:2380
            initial-cluster-state: new
            name: k8s-c3-e1
            listen-peer-urls: https://10.10.10.90:2380
            listen-client-urls: https://10.10.10.90:2379
            advertise-client-urls: https://10.10.10.90:2379
            initial-advertise-peer-urls: https://10.10.10.90:2380

etcd check from master

root@k8s-c3-m1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.10.10.90:2379 cluster-health
member 2855b88ffd64a219 is healthy: got healthy result from https://10.10.10.91:2379
member 54861c1657ba1b20 is healthy: got healthy result from https://10.10.10.92:2379
member 6fc6fbb1e152a287 is healthy: got healthy result from https://10.10.10.90:2379
cluster is healthy

kubeadm config on master

root@k8s-c3-m1:~# cat /root/kubeadmcfg.yaml 
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "127.0.0.1"
  - "10.10.10.90"
  - "10.10.10.91"
  - "10.10.10.92"
  - "10.10.10.76"
controlPlaneEndpoint: "10.10.10.76:6443"
etcd:
    external:
        endpoints:
        - https://10.10.10.90:2379
        - https://10.10.10.91:2379
        - https://10.10.10.92:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

docker ps

root@k8s-c3-m1:~# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
d6e9af6f2585        dd862b749309           "kube-scheduler --ad…"   28 minutes ago      Up 28 minutes                           k8s_kube-scheduler_kube-scheduler-k8s-c3-m1_kube-system_4b52d75cab61380f07c0c5a69fb371d4_1
76bcca06bb0c        40a817357014           "kube-controller-man…"   28 minutes ago      Up 28 minutes                           k8s_kube-controller-manager_kube-controller-manager-k8s-c3-m1_kube-system_3a2670bb8847c2036740fe0f0a3de429_1
74c9b34ec00d        fc3801f0fc54           "kube-apiserver --au…"   About an hour ago   Up About an hour                        k8s_kube-apiserver_kube-apiserver-k8s-c3-m1_kube-system_6fb1fd1d468dedcf6a62eff4d392685e_0
e68bbbc0967e        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-scheduler-k8s-c3-m1_kube-system_4b52d75cab61380f07c0c5a69fb371d4_0
0d6e0d0040cf        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-controller-manager-k8s-c3-m1_kube-system_3a2670bb8847c2036740fe0f0a3de429_0
29f7974ae280        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-apiserver-k8s-c3-m1_kube-system_6fb1fd1d468dedcf6a62eff4d392685e_0

journalctl -xeu kubelet

root@k8s-c3-m1:~# journalctl -xeu kubelet
Mar 04 15:44:05 k8s-c3-m1 kubelet[1512]: I0304 15:44:05.502619    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:15 k8s-c3-m1 kubelet[1512]: I0304 15:44:15.582590    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:24 k8s-c3-m1 kubelet[1512]: I0304 15:44:24.567123    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:25 k8s-c3-m1 kubelet[1512]: I0304 15:44:25.622999    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:35 k8s-c3-m1 kubelet[1512]: I0304 15:44:35.669595    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:45 k8s-c3-m1 kubelet[1512]: I0304 15:44:45.742763    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:49 k8s-c3-m1 kubelet[1512]: I0304 15:44:49.566491    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:55 k8s-c3-m1 kubelet[1512]: I0304 15:44:55.812636    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:58 k8s-c3-m1 kubelet[1512]: I0304 15:44:58.566265    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:05 k8s-c3-m1 kubelet[1512]: I0304 15:45:05.890388    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:15 k8s-c3-m1 kubelet[1512]: I0304 15:45:15.971426    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:26 k8s-c3-m1 kubelet[1512]: I0304 15:45:26.043344    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:36 k8s-c3-m1 kubelet[1512]: I0304 15:45:36.117636    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:36 k8s-c3-m1 kubelet[1512]: I0304 15:45:36.566338    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:46 k8s-c3-m1 kubelet[1512]: I0304 15:45:46.190995    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:51 k8s-c3-m1 kubelet[1512]: I0304 15:45:51.566093    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:56 k8s-c3-m1 kubelet[1512]: I0304 15:45:56.273010    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:06 k8s-c3-m1 kubelet[1512]: I0304 15:46:06.346175    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:16 k8s-c3-m1 kubelet[1512]: I0304 15:46:16.384087    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach

systemctl

root@k8s-c3-m1:~# systemctl status kubelet          
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf, 20-etcd-service-manager.conf
   Active: active (running) since Mon 2019-03-04 14:52:31 GMT; 55min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 1512 (kubelet)
    Tasks: 17
   Memory: 42.1M
      CPU: 2min 45.226s
   CGroup: /system.slice/kubelet.service
           └─1512 /usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true

Mar 04 15:46:26 k8s-c3-m1 kubelet[1512]: I0304 15:46:26.450692    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:27 k8s-c3-m1 kubelet[1512]: I0304 15:46:27.566498    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:36 k8s-c3-m1 kubelet[1512]: I0304 15:46:36.519582    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:46 k8s-c3-m1 kubelet[1512]: I0304 15:46:46.621611    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.566111    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.568601    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.706182    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:06 k8s-c3-m1 kubelet[1512]: I0304 15:47:06.778864    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:16 k8s-c3-m1 kubelet[1512]: I0304 15:47:16.852441    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:26 k8s-c3-m1 kubelet[1512]: I0304 15:47:26.893380    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
@neolit123 neolit123 added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. area/HA labels Mar 7, 2019
@timothysc timothysc added this to the v1.15 milestone Mar 11, 2019
@hreidar
Copy link
Author

hreidar commented Mar 12, 2019

Hi, @timothysc, I see that you added this to a milestone for v1.15, am I hitting a known problem?

@neolit123
Copy link
Member

neolit123 commented Mar 12, 2019

@hreidar
the milestone change means that the team is out of bandwidth to work on this ticket this cycle (release 1.14 is soon).

i don't believe we have gotten a similar report about external ETCD HA clusters not being able to write CRI socket. also, i can't seem to find a problem with your setup.

what happens if you try to kubeadm init --ignore-preflight-errors=all on one of your nodes (i.e. run single control plane setup)? do you get the same error?

root@k8s-c3-m1:~# journalctl -xeu kubelet

anything else that is interesting in the kubelet logs?
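For example, the isolation test and a quick scan of the kubelet logs could look roughly like this (a sketch only; --ignore-preflight-errors=all is deliberately permissive):

kubeadm reset -f
kubeadm init --ignore-preflight-errors=all -v 5
journalctl -u kubelet --no-pager | grep -iE 'error|fail' | tail -n 50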

@hreidar
Copy link
Author

hreidar commented Mar 12, 2019

@neolit123 thanks for the info.

I agree. I don't see anything wrong and after the error I'm able to query the kubernetes API and I can see k8s data in my etcd cluster.

I'll try to make some more tests tomorrow. I will report if I find something interesting.

@ghost
Copy link

ghost commented Mar 17, 2019

Can you share the logs of your kube-apiserver container, @hreidar?
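For a dockershim-based setup like this one, those logs can be pulled roughly as follows (a sketch, assuming the api-server container is still running):

docker ps --filter name=k8s_kube-apiserver --format '{{.ID}}'
docker logs --tail 200 $(docker ps -q --filter name=k8s_kube-apiserver)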

@fabriziopandini
Copy link
Member

If I'm not wrong, the drop-in 20-etcd-service-manager.conf should not exist on control-plane nodes

Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 20-etcd-service-manager.conf
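If that file ended up on a control-plane node by accident, removing it and letting 10-kubeadm.conf take over again would look roughly like this (a sketch, assuming the default drop-in path):

rm /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
systemctl daemon-reload
systemctl restart kubelet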

@hreidar
Copy link
Author

hreidar commented Mar 17, 2019

@codeaholic I can but not until Tuesday as I'm at home with my two girls that are ill at the moment.

@fabriziopandini there is a possibility that I have generated this file by accident as I was preparing the nodes all at once. Does this influence the other k8s components in some way? etcd should not be installed on the master in my case.

@hreidar
Copy link
Author

hreidar commented Mar 20, 2019

Hi, @codeaholic, @fabriziopandini, I had to redo my test cluster as my setup was fubar when I got to work on Tuesday. This time around I got things to work it seems.

hreidar@k8s-c3-m1:~# sudo kubeadm init --config /root/kubeadmcfg.yaml
...
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.10.10.76:6443 --token z9bf7c.8tl3crfvd6onov70 --discovery-token-ca-cert-hash sha256:863eb2eab4e4623f5751306f181e00e8fe4c01da00cc8161cdee9f35b0ed944c

hreidar@k8s-c3-m1:~#

The only thing I did differently this time is I ran kubeadm commands using sudo instead of root. Is this something that is known to screw things up or .... ?

If so it should probably be documented somewhere.
Thanks for your help, I will close this issue.

@hreidar hreidar closed this as completed Mar 20, 2019
@stgleb
Copy link

stgleb commented Apr 18, 2019

Got the same issue on my local setup

@ylroki
Copy link

ylroki commented May 16, 2019

If I'm not wrong, the drop-in 20-etcd-service-manager.conf should not exist on control-plane nodes

Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 20-etcd-service-manager.conf

Drop 20-etcd-service-manager.conf; it works!

@parkjunhyo
Copy link

I solved this issue.

docker info | grep -i cgroup
---> It can be "systemd"

/var/lib/kubelet/config.yaml
---> cgroupDriver: cgroupfs

Because of this mismatch, kubelet would not run properly, so I tried to make the two values match.

In /etc/systemd/system/kubelet.service.d/
10-kubeadm.conf
20-etcd-service-manager.conf (if you want to use external etcd, this file is located in there)

In 10-kubeadm.conf
Any option added there, such as "--cgroup-driver=systemd", will not take effect. You have to add it in config.yaml instead:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

In 20-etcd-service-manager.conf
I can add "--cgroup-driver=systemd" like below.
ExecStart=/usr/bin/kubelet --address=0.0.0.0 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cgroup-driver=systemd

After that, restart kubelet and re-run kubeadm; it should work.
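A quick way to confirm the two sides now agree on the driver (a sketch; the config path is the kubeadm default):

docker info --format '{{.CgroupDriver}}'
grep cgroupDriver /var/lib/kubelet/config.yaml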

@yelongyu
Copy link

Got the same issue on my external setup:

I0618 20:52:58.265837   31415 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://k8s-master:60443/api/v1/nodes/10-10-40-93'
I0618 20:52:58.268370   31415 round_trippers.go:438] GET https://k8s-master:60443/api/v1/nodes/10-10-40-93 404 Not Found in 2 milliseconds
I0618 20:52:58.268389   31415 round_trippers.go:444] Response Headers:
I0618 20:52:58.268396   31415 round_trippers.go:447]     Content-Type: application/json
I0618 20:52:58.268402   31415 round_trippers.go:447]     Content-Length: 192
I0618 20:52:58.268409   31415 round_trippers.go:447]     Date: Tue, 18 Jun 2019 12:52:58 GMT
I0618 20:52:58.268430   31415 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"10-10-40-93\" not found","reason":"NotFound","details":{"name":"10-10-40-93","kind":"nodes"},"code":404}
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

@jessehu
Copy link

jessehu commented Aug 4, 2019

I use kubeadm v1.15.0 on Ubuntu 16.04.2 LTS and hit the same issue when running sudo kubeadm init --pod-network-cidr=10.244.0.0/16. kubeadm v1.15.1 works perfectly.

@adamkdean
Copy link

I got this when I had run init, then manually cleared down the machine, and then ran it again, while testing etc. Running sudo kubectl reset followed by re-init, sudo kubeadm init --pod-network-cidr=10.244.0.0/16 etc, worked fine.

@Dusan-Dingarac
Copy link

sudo kubectl reset

The right command is sudo kubeadm reset; follow its output, and at least delete /etc/cni/net.d and $HOME/.kube/config.
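In full, the clean-up before re-running init looks roughly like this (a sketch; the paths are the ones kubeadm reset tells you to remove):

sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
rm -rf $HOME/.kube/config
sudo kubeadm init --pod-network-cidr=10.244.0.0/16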

@thulasikumar729
Copy link

[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition

@thulasikumar729
Copy link

How to resolve this issue?

@chrisliu1995
Copy link

How to resolve this issue?

too little information

@neolit123
Copy link
Member

try the support channels:
https://github.com/kubernetes/kubernetes/blob/master/SUPPORT.md

if a client cannot write information to a Node object (such as the CRI socket information) due to timeout, then something is not OK with the api-server in the cluster.
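A couple of quick probes against the api-server usually narrow this down, for example (a sketch; substitute your own controlPlaneEndpoint address):

curl -k https://<LB-or-VIP>:6443/healthz
kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw /healthz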

@vupv
Copy link

vupv commented Nov 25, 2021

I have the same error but I don't know what is wrong with my steps:
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
init.log

I plan to set up the cluster with the boxes below
master01 --> 192.168.1.85
master02 --> 192.168.1.86
haproxy01, keepalived --> 192.168.1.87 , VIP --> 192.168.1.88
worker01 --> 192.168.1.90
worker02 --> 192.168.1.91

Install docker, kubelet, kubeadm and kubectl on all cluster nodes

HAProxy setup
root@haproxy01:~# cat /etc/haproxy/haproxy.cfg
global
...
defaults
...
frontend k8s_frontend
    bind 192.168.1.88:6443
    option tcplog
    mode tcp
    default_backend k8s_backend

backend k8s_backend
    mode tcp
    balance roundrobin
    option tcp-check
    server master01 192.168.1.85:6443 check fall 3 rise 2
    server master02 192.168.1.86:6443 check fall 3 rise 2

Keepalived setup

root@haproxy01:~# cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {       # Requires keepalived-1.1.13
    script "killall -0 haproxy" # cheaper than pidof
    interval 2                  # check every 2 seconds
    weight 2                    # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface enp0s3
    state MASTER
    virtual_router_id 51
    priority 100                # 101 on master, 100 on backup
    virtual_ipaddress {
        192.168.1.88 brd 192.168.1.255 dev enp0s3 label enp0s3:1
    }
    track_script {
        chk_haproxy
    }
}
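Before running kubeadm init it is worth confirming the VIP is actually up and forwarding to the masters, roughly like this (a sketch matching the addresses above):

ip addr show enp0s3 | grep 192.168.1.88
nc -v 192.168.1.88 6443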

Generating the TLS certificates

$ vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}

$ vim ca-csr.json
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IE",
      "L": "Cork",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Cork Co."
    }
  ]
}

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IE",
      "L": "Cork",
      "O": "Kubernetes",
      "OU": "Kubernetes",
      "ST": "Cork Co."
    }
  ]
}

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=192.168.1.85,192.168.1.86,192.168.1.87,192.168.1.88,192.168.1.89,127.0.0.1,kubernetes.default \
  -profile=kubernetes kubernetes-csr.json | \
  cfssljson -bare kubernetes

scp ca.pem kubernetes.pem kubernetes-key.pem [email protected]:/etc/etcd/
scp ca.pem kubernetes.pem kubernetes-key.pem [email protected]:/etc/etcd/

Etcd on two masters setup
root@master01:~# cat /etc/systemd/system/etcd.service

[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \
  --name 192.168.1.85 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.1.85:2380 \
  --listen-peer-urls https://192.168.1.85:2380 \
  --listen-client-urls https://192.168.1.85:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.85:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster 192.168.1.85=https://192.168.1.85:2380,192.168.1.86=https://192.168.1.86:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

root@master02:~# cat /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \
  --name 192.168.1.86 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.1.86:2380 \
  --listen-peer-urls https://192.168.1.86:2380 \
  --listen-client-urls https://192.168.1.86:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.86:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster 192.168.1.85=https://192.168.1.85:2380,192.168.1.86=https://192.168.1.86:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

root@master01:~# ETCDCTL_API=3 etcdctl member list
8a137d8e3cb900a, started, 192.168.1.86, https://192.168.1.86:2380, https://192.168.1.86:2379, false
75b9a29ae5a417ae, started, 192.168.1.85, https://192.168.1.85:2380, https://192.168.1.85:2379, false
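It is also worth checking health over the TLS endpoints the api-server will use, roughly like this (a sketch, using the cert paths from the etcd units above):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.85:2379,https://192.168.1.86:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  endpoint health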

Prepare cluster config file
root@master01:~# cat cluster.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
    endpoints:
    - https://192.168.1.85:2379
    - https://192.168.1.86:2379
networking:
  dnsDomain: cluster.local
  podSubnet: 10.30.0.0/24
  serviceSubnet: 10.96.0.0/12
kubernetesVersion: v1.22.4
controlPlaneEndpoint: 192.168.1.88:6443
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    authorization-mode: "RBAC"
  certSANs:
  - "127.0.0.1"
  - "192.168.1.85"
  - "192.168.1.86"
  - "192.168.1.88"
  - ".fe.me"
controllerManager: {}
scheduler: {}
certificatesDir: /etc/kubernetes/pki
imageRepository: k8s.gcr.io
clusterName: kubernetes
dns: {}
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
root@master01:~#

root@master01:~# kubeadm init --config=cluster.yaml

@neolit123
Copy link
Member

neolit123 commented Nov 25, 2021 via email

@vupv
Copy link

vupv commented Nov 25, 2021

The load balancer forwards requests to the kube-apiserver on the two masters:
192.168.1.85:6443
192.168.1.86:6443

How can I verify whether the kube-apiserver is working properly or not?

@khaledakrt
Copy link

I tried all the solutions but I get the same error every time:
==> [kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: nodes "kubernetes-master" not found
To see the stack trace of this error execute with --v=5 or higher

Any help?

@pacoxu
Copy link
Member

pacoxu commented May 18, 2023

error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: nodes "kubernetes-master" not found
To see the stack trace of this error execute with --v=5 or higher

Checking the kubelet log may help.

BTW, this looks like a support request, and we suggest using Slack or StackOverflow for that.
The node NotFound error is mainly caused by the kubelet not successfully registering with the apiserver. (That is why you should check the kubelet log for the root cause.)
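For example, something like this usually shows why registration is failing (a sketch):

journalctl -u kubelet --no-pager --since "10 min ago" | grep -iE 'error|fail|register'
kubectl get nodes -o wide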

@cyford
Copy link

cyford commented Aug 13, 2023

I believe my reason was that I used --node-name $NODENAME, which set the node name to my hostname, and my hostname was uppercase, which I think may not be supported.
Removing the --node-name switch worked.
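Node names have to be valid lowercase RFC 1123 names, so if the hostname is mixed-case, forcing it to lowercase should also work instead of dropping the flag (a sketch):

kubeadm init --node-name "$(hostname | tr '[:upper:]' '[:lower:]')"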
