This doc provides instructions on how to test the Bring Your Own Host provider on a local workstation using:
- Kind for provisioning a management cluster
- docker run for creating hosts to be used as capacity for BYO Host machines
- the BYOH provider to add the above hosts to a workload cluster
- Tilt for faster iterative development
A Docker image is required for creating hosts with docker run.
Clone BYOH Repo
git clone [email protected]:vmware-tanzu/cluster-api-provider-bringyourownhost.git
We are using kind to create the Kubernetes cluster that will be turned into a Cluster API management cluster later in this doc.
cat > kind-cluster.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
image: kindest/node:v1.22.0
EOF
kind create cluster --config kind-cluster.yaml
Installing Cluster API into the Kubernetes cluster will turn it into a Cluster API management cluster.
We are going to use Tilt to do so, so that your local environment is set up for rapid iterations, as described in Developing Cluster API with Tilt.
To do that, you need to clone both https://github.com/kubernetes-sigs/cluster-api/ and https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost locally.
Clone CAPI Repo
git clone [email protected]:kubernetes-sigs/cluster-api.git
cd cluster-api
git checkout v1.0.0
Create a tilt-settings.json file
Next, create a tilt-settings.json file and place it in your local copy of cluster-api:
cat > tilt-settings.json <<EOF
{
"default_registry": "gcr.io/k8s-staging-cluster-api",
"enable_providers": ["byoh", "kubeadm-bootstrap", "kubeadm-control-plane"],
"provider_repos": ["../cluster-api-provider-bringyourownhost"]
}
EOF
Run Tilt
To launch your development environment, run the command below and keep it running in the shell
tilt up
Wait for all the resources to come up; the status can be viewed in the Tilt UI.
Now that you have a management cluster with Cluster API and the BYOH provider installed, we can start to create a workload cluster.
Create Management Cluster kubeconfig
cp ~/.kube/config ~/.kube/management-cluster.conf
export KIND_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-control-plane)
sed -i 's/ server\:.*/ server\: https\:\/\/'"$KIND_IP"'\:6443/g' ~/.kube/management-cluster.conf
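The sed rewrite above can be sanity-checked on a throwaway copy before touching your real kubeconfig. A minimal sketch, using a fabricated kubeconfig fragment and an assumed kind IP (in practice, use the `docker inspect` output):

```shell
# Throwaway kubeconfig fragment with a placeholder server line (fabricated values)
cat > /tmp/demo-kubeconfig <<EOF
clusters:
- cluster:
    server: https://127.0.0.1:12345
EOF

KIND_IP=172.18.0.2  # assumed address for illustration only

# Same substitution as above, applied to the demo file
sed -i 's/ server\:.*/ server\: https\:\/\/'"$KIND_IP"'\:6443/g' /tmp/demo-kubeconfig

# The server line now points at the kind control-plane address
grep "server: https://$KIND_IP:6443" /tmp/demo-kubeconfig
```

Note that GNU sed is assumed; on macOS, `sed -i` requires an explicit backup suffix (`sed -i ''`).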
Generate host-agent binaries
make host-agent-binaries
cd cluster-api-provider-bringyourownhost
make prepare-byoh-docker-host-image-dev
Run the following to create n hosts, replacing n with the desired number of hosts, where n > 1
for i in {1..n}
do
echo "Creating docker container host $i"
docker run --detach --tty --hostname host$i --name host$i --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --network kind byoh/node:v1.22.3
echo "Copy agent binary to host $i"
docker cp bin/byoh-hostagent-linux-amd64 host$i:/byoh-hostagent
echo "Copy kubeconfig to host $i"
docker cp ~/.kube/management-cluster.conf host$i:/management-cluster.conf
done
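Note that bash brace expansion (`{1..n}`) only works with literal numbers, so a variable count needs `seq` instead. A minimal sketch of the loop structure, echoing instead of invoking docker so nothing is actually created (`NUM_HOSTS` is an illustrative variable, not part of the BYOH tooling):

```shell
NUM_HOSTS=2  # assumed host count for illustration
hosts=""

for i in $(seq 1 "$NUM_HOSTS"); do
  # In the real loop this is where the docker run / docker cp commands above go
  hosts="$hosts host$i"
  echo "Creating docker container host$i"
done
echo "hosts:$hosts"
```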
Start the host agent on each host and keep it running
export HOST_NAME=host1
docker exec -it $HOST_NAME /bin/bash
./byoh-hostagent --kubeconfig management-cluster.conf
Repeat the same steps, changing the HOST_NAME environment variable, for each of the hosts that you created.
Check whether the hosts have registered themselves with the management cluster.
Open another shell and run
kubectl get byohosts
Open a new shell and change directory to the cluster-api-provider-bringyourownhost repository. Run the commands below
export CLUSTER_NAME="test1"
export NAMESPACE="default"
export KUBERNETES_VERSION="v1.22.3"
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
export CONTROL_PLANE_ENDPOINT_IP=<static IP from the subnet where the containers are running>
export BUNDLE_LOOKUP_TAG=<bundle tag>
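Since envsubst silently substitutes empty strings for unset variables, it can help to verify the required variables are exported before rendering the template. A small sketch under that assumption (`require_env` is a hypothetical helper, and the exports shown are illustrative values):

```shell
require_env() {
  # Fail if any named environment variable is unset or empty
  local missing=0 name
  for name in "$@"; do
    if [ -z "$(printenv "$name" 2>/dev/null)" ]; then
      echo "Missing required variable: $name" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Illustrative values; use your real exports in practice
export CLUSTER_NAME="test1" KUBERNETES_VERSION="v1.22.3"
require_env CLUSTER_NAME KUBERNETES_VERSION && echo "all set"
```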
From the cluster-api-provider-bringyourownhost folder
cat test/e2e/data/infrastructure-provider-byoh/v1beta1/cluster-template.yaml | envsubst | kubectl apply -f -
kubectl get machines
Dig into the resources as the machines get provisioned.
kubectl get kubeadmconfig
kubectl get BYOmachines
kubectl get BYOhost
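Provisioning takes a while, so rather than re-running these commands by hand, a small retry helper can poll until a command succeeds. A generic sketch (`wait_for` is a hypothetical helper, not part of the BYOH tooling; the kubectl invocation in the comment is just one possible use):

```shell
wait_for() {
  # Retry a command up to $1 times, one second apart, until it succeeds
  local tries=$1; shift
  local i
  for i in $(seq 1 "$tries"); do
    if "$@"; then return 0; fi
    sleep 1
  done
  return 1
}

# For example, poll until the machines resource is queryable:
#   wait_for 60 kubectl get machines
```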
Deploy a CNI solution
kubectl get secret $CLUSTER_NAME-kubeconfig -o jsonpath='{.data.value}' | base64 -d > $CLUSTER_NAME-kubeconfig
kubectl --kubeconfig $CLUSTER_NAME-kubeconfig apply -f test/e2e/data/cni/kindnet/kindnet.yaml
After a short while, our nodes should be running and in Ready state. Check the workload cluster
kubectl --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
Or peek at the host agent logs.
To clean up, delete the workload cluster, remove the host containers, and delete the kind cluster:
kubectl delete cluster $CLUSTER_NAME
docker rm -f $HOST_NAME
kind delete cluster
The installer is responsible for detecting the BYOH OS, downloading a BYOH bundle and installing/uninstalling it.
The current list of supported tuples of OS, Kubernetes version, and BYOH bundle name can be retrieved with:
./cli --list-supported
An example output looks like:
OS                    | K8S Version | BYOH Bundle Name
Ubuntu_20.04.*_x86-64 | v1.22.3     | byoh-bundle-ubuntu_20.04.1_x86-64_k8s_v1.22.3
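The OS column is a pattern rather than a literal string, so any Ubuntu 20.04 point release matches `Ubuntu_20.04.*_x86-64`. A quick bash sketch of that kind of matching (the detected-OS string is a made-up example, and shell glob matching stands in for whatever matching the installer actually uses):

```shell
detected_os="Ubuntu_20.04.3_x86-64"  # illustrative value; the installer detects this itself

case "$detected_os" in
  Ubuntu_20.04.*_x86-64) match=yes ;;
  *)                     match=no ;;
esac
echo "match=$match"
```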
As of this writing, the following packages must be pre-installed on the BYOH host:
- socat
- ebtables
- ethtool
- conntrack
sudo apt-get install socat ebtables ethtool conntrack
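A quick preflight check can confirm whether those binaries are already on the host before installing anything. A minimal sketch (`check_prereqs` is a hypothetical helper, and the final invocation uses commands known to exist so the example runs anywhere):

```shell
check_prereqs() {
  # Report any command from the list that is not on PATH
  local missing="" cmd
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing" >&2
    return 1
  fi
  echo "all prerequisites present"
}

# On a BYOH host you would run: check_prereqs socat ebtables ethtool conntrack
check_prereqs sh ls  # illustrative invocation
```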
Optional: this step describes downloading the Kubernetes host components for Debian.
# Build docker image
(cd agent/installer/bundle_builder/ingredients/deb/ && docker build -t byoh-ingredients-deb .)
# Create a directory for the ingredients and download to it
(mkdir -p byoh-ingredients-download && docker run --rm -v `pwd`/byoh-ingredients-download:/ingredients byoh-ingredients-deb)
This step describes providing custom Kubernetes host components. They can be copied to byoh-ingredients-download. Files must match the following globs:
*containerd*.tar
*kubeadm*.deb
*kubelet*.deb
*kubectl*.deb
*cri-tools*.deb
*kubernetes-cni*.deb
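Before building, it can be worth verifying that the ingredients directory contains at least one file per required glob. A small sketch, run here against fabricated file names in a temporary directory (`ingredients_ok` is a hypothetical helper, not part of the bundle builder):

```shell
ingredients_ok() {
  # Check that at least one file matches each required pattern in the given dir
  local dir=$1 pat
  for pat in '*containerd*.tar' '*kubeadm*.deb' '*kubelet*.deb' \
             '*kubectl*.deb' '*cri-tools*.deb' '*kubernetes-cni*.deb'; do
    if ! ls $dir/$pat >/dev/null 2>&1; then
      echo "no file matching $pat in $dir" >&2
      return 1
    fi
  done
  echo "all ingredient globs satisfied"
}

# Illustrative run against fabricated file names in a temp directory
demo=$(mktemp -d)
touch "$demo/cri-containerd-1.5.tar" "$demo/kubeadm_1.22.deb" \
      "$demo/kubelet_1.22.deb" "$demo/kubectl_1.22.deb" \
      "$demo/cri-tools_1.22.deb" "$demo/kubernetes-cni_0.8.deb"
ingredients_ok "$demo"
```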
#Build docker image
(cd agent/installer/bundle_builder/ && docker build -t byoh-build-push-bundle .)
# Build a BYOH bundle and publish it to an OCI-compliant repo
docker run --rm -v `pwd`/byoh-ingredients-download:/ingredients --env BUILD_ONLY=0 byoh-build-push-bundle <REPO>/<BYOH Bundle name>
The BYOH bundle name specified above must match one of the supported OS and Kubernetes BYOH bundle names
# You can also build a tarball of the bundle without publishing. This will create a bundler.tar in the current directory and can be used for custom pushing
docker run --rm -v `pwd`/byoh-ingredients-download:/ingredients -v `pwd`:/bundle --env BUILD_ONLY=1 byoh-build-push-bundle
# Optionally, additional configuration can be included in the bundle by mounting a local path under /config of the container. It will be placed on top of any drop-in configuration created by the packages and tars in the bundle
docker run --rm -v `pwd`/byoh-ingredients-download:/ingredients -v `pwd`:/bundle -v `pwd`/agent/installer/bundle_builder/config/ubuntu/20_04/k8s/1_22:/config --env BUILD_ONLY=1 byoh-build-push-bundle
The installer CLI exposes the installer package as a command line tool. It can be built by running
go build ./agent/installer/cli
Once built, for a list of all commands, run
./cli --help
In the following examples, the os and k8s flags must match one of the supported OS and Kubernetes BYOH bundle names
Examples:
# Will return if/how the current OS is detected
./cli --detect
# Will return the OS changes that installer will make during install and uninstall without actually doing them
./cli --preview-os-changes --os Ubuntu_20.04.*_x86-64 --k8s v1.22.3
# Will detect the current OS and install BYOH bundle with kubernetes v1.22.3 from the default repo
sudo ./cli --install --k8s v1.22.3
# Will override the OS detection and will use the specified repo
sudo ./cli --install --os Ubuntu_20.04.1_x86-64 --bundle-repo 10.26.226.219:5000/repo --k8s v1.22.3
# Will override the OS detection, use the specified repo and uninstall
sudo ./cli --uninstall --os Ubuntu_20.04.1_x86-64 --bundle-repo 10.26.226.219:5000/repo --k8s v1.22.3