Multi-Kind leverages Vagrant and Kind (Kubernetes in Docker) to create multiple local Kubernetes and Kubeflow clusters on the same host machine; see the diagram below for a simple layout.
As machines get more powerful, it is a waste to run only one Kubernetes cluster on them, especially for applications that merely need a local Kubernetes to practice on. One example is our Kubeflow workshop. To fully utilize hardware resources, we leverage Vagrant to construct a fully isolated environment, install the required packages on it (e.g. Kubernetes, Kubeflow, and more), map ports for the kube-apiserver and SSH, and export its kubeconfig to the host. Users on the host machine can then easily talk to the guest kube-apiserver via kubectl.
We expect users to be in one of the following environments:

- Windows with Docker Desktop installed
- Linux

This tool provides abstractions for operating clusters in either environment.
Ideally, we could just use Kind, which runs each cluster inside a container, to provide resource isolation. However, Kind is unable to isolate resources from its underlying kubelet (see issue) due to kubelet's implementation. Thus, Vagrant serves as the resource-isolation layer and provides a clean guest environment.
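To make the gap concrete: cgroup limits constrain how much a container may use, but tools that read `/proc` (as kubelet does when detecting node capacity) still see the host's resources. A minimal illustration, assuming Docker on a multi-core host:

```sh
# Limit the container to 1 CPU worth of quota, then ask how many CPUs it sees.
# nproc still prints the host's CPU count, because /proc inside the container
# reflects the host; kubelet detects capacity the same way, so a Kind node
# advertises the host's full resources regardless of container limits.
docker run --rm --cpus=1 ubuntu:21.10 nproc
```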
NOTE: Vagrant support is not battle-tested, so use it with caution.
```
Usage:
  multikf add <machine-name> [flags]

Flags:
      --cpus int               number of cpus allocated to the guest machine (default 1)
      --export_ports string    export ports to host, delimited by comma (example: 8443:443 stands for mapping host port 8443 to container port 443)
      --f                      force to create the instance regardless of the machine status
  -h, --help                   help for add
      --memoryg int            number of memory in gigabytes allocated to the guest machine (default 1)
      --use_gpus int           use gpu resources, possible values 0 or 1 (default 0)
      --with_ip string         with a specific ip address for kubeapi (default "0.0.0.0")
      --with_kubeflow          install kubeflow modules (default true)
      --with_password string   with a specific password for the default user (default "12341234")
```
Examples:

```sh
# Create a machine backed by the Vagrant provisioner
./multikf add test000 --cpus 1 --memoryg 1 --provisioner=vagrant

# Create a Docker-backed machine with GPU support enabled
./multikf add test000 --cpus=1 --memoryg=1 --use_gpus=1 --provisioner=docker

# Create a Docker-backed machine with a custom default-user password
./multikf add test000 --cpus=1 --memoryg=16 --with_password=helloworld --provisioner=docker
```
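The `--export_ports` flag documented above composes with the same command. A minimal sketch, where the machine name `test001` and the 8443:443 port pair are illustrative rather than from the source:

```sh
# Map host port 8443 to container port 443 on the new machine
./multikf add test001 --cpus=1 --memoryg=1 --export_ports=8443:443 --provisioner=docker
```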
Export the machine's kubeconfig, then run kubectl from the host:

```sh
./multikf export test000 --kubeconfig_path /tmp/test000.kubeconfig
kubectl get pods --all-namespaces --kubeconfig=/tmp/test000.kubeconfig
```
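If you would rather not pass `--kubeconfig` on every invocation, standard kubectl behavior also honors the `KUBECONFIG` environment variable:

```sh
# Point kubectl at the exported config for the whole shell session
export KUBECONFIG=/tmp/test000.kubeconfig
kubectl get nodes
kubectl get pods --all-namespaces
```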
List machines (the MEMORY column shows used/total):

```sh
./multikf list
```

```
+---------+------------------+---------+------+---------------+
| NAME    | DIR              | STATUS  | CPUS | MEMORY        |
+---------+------------------+---------+------+---------------+
| test000 | .vagrant/test000 | running | 1    | 70720/1000328 |
+---------+------------------+---------+------+---------------+
```
Delete a machine:

```sh
./multikf delete test000
```

Connect to the Kubeflow dashboard of a machine:

```sh
./multikf connect kubeflow test000
```
The fields listed here are on our roadmap.
| Fields | machine (Docker) | machine (Vagrant) |
|---|---|---|
| CPU isolation | O | O |
| Memory isolation | O | O |
| GPU isolation | O | X |
| Expose KubeApi IP | O | O |
To pass a GPU through to a Docker container, one approach is to use `--gpus=all` when launching the container:

```sh
docker run -it --gpus=all ubuntu:21.10 /bin/bash
```

This relies on the host's CUDA driver. However, Kind does NOT support this approach (see issue), so we use our home-crafted Kind for this purpose.
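Before creating a GPU-enabled machine, it can help to confirm that GPU pass-through works on the host at all. A minimal check, assuming the NVIDIA driver and container toolkit are installed (the runtime mounts `nvidia-smi` into the container):

```sh
# Should list the host's GPUs if pass-through is working
docker run --rm --gpus=all ubuntu:21.10 nvidia-smi
```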