updated vagrant doc (kubernetes-sigs#3719)

Commit 5e80603, 1 parent c8d95a1.
Showing 2 changed files with 127 additions and 52 deletions.
`contrib/misc/clusteradmin-rbac.yml` (new file, @@ -0,0 +1,15 @@):

```
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```
`docs/vagrant.md` (updated, @@ -1,69 +1,129 @@):
Vagrant Install
===============

Introduction
============

Assuming you have Vagrant 2.0+ installed with virtualbox, libvirt/qemu or vmware (the latter is untested), you should be able to launch a 3-node Kubernetes cluster by simply running `vagrant up`. This will spin up 3 VMs and install Kubernetes on them. Once they are done, you can connect to any of them by running `vagrant ssh k8s-[1..3]`.
To give an estimate of the expected duration of a provisioning run: on a dual-core i5-6300U laptop with an SSD, provisioning takes around 13 to 15 minutes once the container images and other files are cached. Note that libvirt/qemu is recommended over virtualbox as it is quite a bit faster, especially during boot-up. An (abridged) provisioning run looks like this:
```
$ vagrant up
Bringing machine 'k8s-01' up with 'virtualbox' provider...
Bringing machine 'k8s-02' up with 'virtualbox' provider...
Bringing machine 'k8s-03' up with 'virtualbox' provider...
==> k8s-01: Box 'bento/ubuntu-14.04' could not be found. Attempting to find and install...
...
...
    k8s-03: Running ansible-playbook...

PLAY [k8s-cluster] *************************************************************

TASK [setup] *******************************************************************
ok: [k8s-03]
ok: [k8s-01]
ok: [k8s-02]
...
...
PLAY RECAP *********************************************************************
k8s-01                     : ok=157  changed=66  unreachable=0  failed=0
k8s-02                     : ok=137  changed=59  unreachable=0  failed=0
k8s-03                     : ok=86   changed=51  unreachable=0  failed=0

$ vagrant ssh k8s-01
vagrant@k8s-01:~$ kubectl get nodes
NAME      STATUS    AGE
k8s-01    Ready     45s
k8s-02    Ready     45s
k8s-03    Ready     45s
```
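
If you would rather use the libvirt/qemu provider recommended above, the following is a sketch; it assumes the vagrant-libvirt plugin is available for your Vagrant version:

```
# install the libvirt provider plugin (one-time setup)
vagrant plugin install vagrant-libvirt
# bring the cluster up with libvirt/qemu instead of virtualbox
vagrant up --provider=libvirt
# or make libvirt the default provider for this shell session
export VAGRANT_DEFAULT_PROVIDER=libvirt
```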
For proper performance a minimum of 12GB RAM is recommended. It is possible to run a 3-node cluster on a laptop with 8GB of RAM using the default Vagrantfile, provided you have 8GB of zram swap configured and not much more than a browser and a mail client running. If you decide to run on such a machine, also make sure that any mounted tmpfs devices are mostly empty, and disable any swap files mounted on HDD/SSD, or you will be in for some serious swap madness. Things can get a bit sluggish during provisioning, but once that's done, the system will actually be able to perform quite well.
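
The zram and tmpfs advice above is easy to verify before you start (a quick sanity check using standard util-linux tools; not something the provisioning itself requires):

```
# list active swap devices; ideally only zram shows up here
swapon --show
# inspect zram devices and their sizes
zramctl
# check how full your tmpfs mounts are
df -h -t tmpfs
```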
Customize Vagrant | ||
================= | ||

You can override the default settings in the `Vagrantfile` either by directly modifying the `Vagrantfile` or through an override file. In the same directory as the `Vagrantfile`, create a folder called `vagrant` and create a `config.rb` file in it. You can then override the variables defined in the `Vagrantfile` by setting them in `vagrant/config.rb`, e.g.:

    echo '$forwarded_ports = {8001 => 8001}' >> vagrant/config.rb

and after `vagrant up` or `vagrant reload`, your host will have port forwarding set up with the guest on port 8001. A more complete example of how to configure this file is given below.
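
For instance, a minimal `vagrant/config.rb` that shrinks the cluster and bumps per-VM resources could look like this (a sketch: `$vm_memory` is assumed to be one of the overridable Vagrantfile variables; the other variables all appear in the larger example further below):

```
cat << EOF > vagrant/config.rb
# two nodes instead of the default three
\$num_instances = 2
# resources per VM (\$vm_memory is an assumed variable name)
\$vm_cpus = 2
\$vm_memory = 4096
# forward the dashboard/proxy port to the host
\$forwarded_ports = {8001 => 8001}
EOF
```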
Use alternative OS for Vagrant | ||
============================== | ||

By default, Vagrant uses an Ubuntu 18.04 box to provision a local cluster. You may use an alternative supported operating system for your local cluster.

Customize the `$os` variable in the `Vagrantfile` or as an override, e.g.:

    echo '$os = "coreos-stable"' >> vagrant/config.rb

The supported operating systems for vagrant are defined in the `SUPPORTED_OS` constant in the `Vagrantfile`.
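
To see which values `$os` accepts in your checkout, you can simply grep for the constant (a quick one-liner, not an official interface):

```
# list the SUPPORTED_OS hash and the box each OS maps to
grep -A 20 'SUPPORTED_OS' Vagrantfile
```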
File and image caching | ||
====================== | ||

Kubespray can take quite a while to start on a laptop. To improve provisioning speed, the variable `download_run_once` is set. This will make kubespray download all files and container images just once and then redistribute them to the other nodes, and, as a bonus, cache all downloads locally and re-use them on the next provisioning run. For more information on download settings see [download documentation](docs/downloads.md).
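
Should you want to toggle that behaviour yourself, `download_run_once` is a regular kubespray variable, so one way is to override it in your inventory's group_vars (a sketch; the exact file layout varies between kubespray versions, and the Vagrantfile may set its own value that takes precedence):

```
# disable the download-once/caching behaviour (path is illustrative)
echo 'download_run_once: false' >> <your_inventory>/group_vars/all/all.yml
```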
Example use of Vagrant | ||
====================== | ||

The following is an example of setting up and running kubespray using `vagrant`. For repeated runs, you could save the script to a file in the root of the kubespray repository and run it by executing `source <name_of_the_file>`.
```
# use virtualenv to install all python requirements
VENVDIR=venv
virtualenv --python=/usr/bin/python3.7 $VENVDIR
source $VENVDIR/bin/activate
pip install -r requirements.txt

# prepare an inventory to test with
INV=inventory/my_lab
rm -rf ${INV}.bak &> /dev/null
mv ${INV} ${INV}.bak &> /dev/null
cp -a inventory/sample ${INV}
rm -f ${INV}/hosts.ini

# customize the vagrant environment
mkdir vagrant
cat << EOF > vagrant/config.rb
\$instance_name_prefix = "kub"
\$vm_cpus = 1
\$num_instances = 3
\$os = "centos-bento"
\$subnet = "10.0.20"
\$network_plugin = "flannel"
\$inventory = "$INV"
\$shared_folders = { 'temp/docker_rpms' => "/var/cache/yum/x86_64/7/docker-ce/packages" }
EOF

# make the rpm cache
mkdir -p temp/docker_rpms

vagrant up

# make a copy of the downloaded docker rpms, to speed up the next provisioning run
scp kub-1:/var/cache/yum/x86_64/7/docker-ce/packages/* temp/docker_rpms/

# copy the kubectl access configuration in place
mkdir $HOME/.kube/ &> /dev/null
ln -s $INV/artifacts/admin.conf $HOME/.kube/config
# make the kubectl binary available
sudo ln -s $INV/artifacts/kubectl /usr/local/bin/kubectl
# or
export PATH=$PATH:$INV/artifacts
```
If a vagrant run failed and you've made some changes to fix the issue that caused the failure, here is how you would re-run ansible:
```
ansible-playbook -vvv -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory cluster.yml
```
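If you would rather start over completely, tearing the VMs down first works as expected (standard vagrant commands):
```
# destroy all VMs of this environment without asking for confirmation
vagrant destroy -f
# and provision again from scratch
vagrant up
```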
If all went well, you can check whether everything is working as expected:
```
kubectl get nodes
```
The output should look like this:
```
$ kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
kub-1   Ready    master   32m   v1.14.1
kub-2   Ready    master   31m   v1.14.1
kub-3   Ready    <none>   31m   v1.14.1
```
Another nice test is the following:
```
kubectl get po --all-namespaces -o wide
```
Which should yield something like the following:
```
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-97c4b444f-9wm86                 1/1     Running   0          31m   10.233.66.2   kub-3    <none>           <none>
kube-system   coredns-97c4b444f-g7hqx                 0/1     Pending   0          30m   <none>        <none>   <none>           <none>
kube-system   dns-autoscaler-5fc5fdbf6-5c48k          1/1     Running   0          31m   10.233.66.3   kub-3    <none>           <none>
kube-system   kube-apiserver-kub-1                    1/1     Running   0          32m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-apiserver-kub-2                    1/1     Running   0          32m   10.0.20.102   kub-2    <none>           <none>
kube-system   kube-controller-manager-kub-1           1/1     Running   0          32m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-controller-manager-kub-2           1/1     Running   0          32m   10.0.20.102   kub-2    <none>           <none>
kube-system   kube-flannel-8tgcn                      2/2     Running   0          31m   10.0.20.103   kub-3    <none>           <none>
kube-system   kube-flannel-b2hgt                      2/2     Running   0          31m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-flannel-zx4bc                      2/2     Running   0          31m   10.0.20.102   kub-2    <none>           <none>
kube-system   kube-proxy-4bjdn                        1/1     Running   0          31m   10.0.20.102   kub-2    <none>           <none>
kube-system   kube-proxy-l5tt5                        1/1     Running   0          31m   10.0.20.103   kub-3    <none>           <none>
kube-system   kube-proxy-x59q8                        1/1     Running   0          31m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-scheduler-kub-1                    1/1     Running   0          32m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-scheduler-kub-2                    1/1     Running   0          32m   10.0.20.102   kub-2    <none>           <none>
kube-system   kubernetes-dashboard-6c7466966c-jqz42   1/1     Running   0          31m   10.233.66.4   kub-3    <none>           <none>
kube-system   nginx-proxy-kub-3                       1/1     Running   0          32m   10.0.20.103   kub-3    <none>           <none>
kube-system   nodelocaldns-2x7vh                      1/1     Running   0          31m   10.0.20.102   kub-2    <none>           <none>
kube-system   nodelocaldns-fpvnz                      1/1     Running   0          31m   10.0.20.103   kub-3    <none>           <none>
kube-system   nodelocaldns-h2f42                      1/1     Running   0          31m   10.0.20.101   kub-1    <none>           <none>
```
Create the clusteradmin RBAC binding (the `contrib/misc/clusteradmin-rbac.yml` file shown at the top of this commit) and get the login token for the dashboard:
```
kubectl create -f contrib/misc/clusteradmin-rbac.yml
kubectl -n kube-system describe secret kubernetes-dashboard-token | grep 'token:' | grep -o '[^ ]\+$'
```
Copy it to the clipboard and now log in to the [dashboard](https://10.0.20.101:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login).
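
If you prefer not to hit the apiserver endpoint directly, another way in (a sketch, assuming the kubectl configuration set up above and the `$forwarded_ports` example from the customization section) is to go through `kubectl proxy`:

```
# expose the apiserver on 127.0.0.1:8001 (kubectl proxy's default port)
kubectl proxy &
# then open this URL in a browser and paste in the token
echo 'http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login'
```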