
settingup_kubernetes_cluster.yml bails out on errors in preflight check #2

Open · tferic opened this issue on Sep 1, 2019 · 1 comment


tferic commented Sep 1, 2019

When running `ansible-playbook settingup_kubernetes_cluster.yml`, the task "TASK [Initializing Kubernetes cluster]" exits with a fatal error.
I am running the playbook on the master node, on CentOS 7.

```
TASK [Initializing Kubernetes cluster] *****************************************
fatal: [docker01.feric.ch]: FAILED! => {
    "changed": true,
    "cmd": "kubeadm init --apiserver-advertise-address 192.168.100.191 --pod-network-cidr=172.16.0.0/16",
    "delta": "0:00:00.521513",
    "start": "2019-09-01 22:26:06.529124",
    "end": "2019-09-01 22:26:07.050637",
    "msg": "non-zero return code",
    "rc": 1,
    "stderr_lines": [
        "[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly",
        "error execution phase preflight: [preflight] Some fatal errors occurred:",
        "[ERROR Port-6443]: Port 6443 is in use",
        "[ERROR Port-10251]: Port 10251 is in use",
        "[ERROR Port-10252]: Port 10252 is in use",
        "[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists",
        "[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists",
        "[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists",
        "[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists",
        "[ERROR Port-10250]: Port 10250 is in use",
        "[ERROR Port-2379]: Port 2379 is in use",
        "[ERROR Port-2380]: Port 2380 is in use",
        "[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty",
        "[preflight] If you know what you are doing, you can make a check non-fatal with '--ignore-preflight-errors=...'"
    ],
    "stdout_lines": [
        "[init] Using Kubernetes version: v1.15.3",
        "[preflight] Running pre-flight checks"
    ]
}

PLAY RECAP **********************************************************************
docker01.feric.ch : ok=12 changed=5 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
docker02.feric.ch : ok=10 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
docker03.feric.ch : ok=10 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

I then tried to run the command directly, without Ansible:
```
$ kubeadm init --apiserver-advertise-address 192.168.100.191 --pod-network-cidr=172.16.0.0/16
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-6443]: Port 6443 is in use
	[ERROR Port-10251]: Port 10251 is in use
	[ERROR Port-10252]: Port 10252 is in use
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR Port-2379]: Port 2379 is in use
	[ERROR Port-2380]: Port 2380 is in use
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with '--ignore-preflight-errors=...'
```
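For what it's worth, this combination of preflight errors (ports in use, manifests already present, non-empty /var/lib/etcd) is what kubeadm reports when a previous `kubeadm init` has already run on the node. A minimal cleanup sketch to return the master to a pre-init state before re-running the playbook; the file name, host group, and guard path below are illustrative, not taken from this repo:

```yaml
---
# reset_master.yml -- a hypothetical helper playbook, not part of this repo.
# Returns a half-initialized master to a pre-init state so that
# settingup_kubernetes_cluster.yml can be re-run from scratch.
# The host group name is an assumption; adjust it to the repo's inventory.
- hosts: kubernetes-master-nodes
  become: yes
  tasks:
    - name: Reset any state left behind by a previous kubeadm init
      shell: kubeadm reset -f
      args:
        # Only run if a previous init actually left its manifests behind.
        removes: /etc/kubernetes/manifests/kube-apiserver.yaml
```

`kubeadm reset` stops the control-plane containers and removes /etc/kubernetes/manifests and the local /var/lib/etcd data, which should clear every [ERROR ...] listed above.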

@learnitguide (Owner) commented

The ports are already in use: `kubeadm init` has already been run on this server, so trying again on the same node will obviously fail the preflight checks; you must try on a fresh server. This is not specific to Ansible. Even when you run the command manually, it throws the same error. If a check fails, kubeadm won't run any further, so you must resolve all the prerequisites before installing to get a smooth installation.
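That said, the playbook could also be made safe to re-run. A minimal idempotency sketch, assuming the cluster is initialized with a plain `shell` task (the task below is illustrative, not the repo's actual code); Ansible's `creates` argument skips the task once the control plane exists instead of failing the preflight checks:

```yaml
- name: Initializing Kubernetes cluster
  shell: kubeadm init --apiserver-advertise-address 192.168.100.191 --pod-network-cidr=172.16.0.0/16
  args:
    # kubeadm init writes admin.conf on success; if it already exists,
    # skip the task on subsequent runs instead of failing preflight.
    creates: /etc/kubernetes/admin.conf
```

With that guard, a second run of the playbook should report the task as skipped rather than fatal.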
