Commit 2a5cbfc
Signed-off-by: Manuel Buil <[email protected]>
1 parent 36147aa
Showing 23 changed files with 954 additions and 189 deletions.
@@ -1,19 +1,17 @@ | ||
--- | ||
title: "Network Options" | ||
title: "Basic Network Options" | ||
weight: 25 | ||
--- | ||
|
||
This page describes K3s network configuration options, including configuration or replacement of Flannel, and configuring IPv6. | ||
|
||
> **Note:** Please reference the [Networking](../networking.md) page for information about CoreDNS, Traefik, and the Service LB. | ||
This page describes K3s network configuration options, including configuration or replacement of Flannel, and configuring IPv6 or dual-stack. | ||
|
||
## Flannel Options | ||
|
||
[Flannel](https://github.com/flannel-io/flannel/blob/master/README.md) is a lightweight provider of layer 3 network fabric that implements the Kubernetes Container Network Interface (CNI). It is commonly referred to as a CNI plugin. | ||
|
||
* Flannel options can only be set on server nodes, and must be identical on all servers in the cluster. | ||
* The default backend for Flannel is `vxlan`. To enable encryption, use the `wireguard-native` backend (see the configuration sketch after this list). | ||
* Using `vxlan` on Raspberry Pi with recent versions of Ubuntu requires [additional preparation](./requirements.md?os=pi#operating-systems). | ||
* Using `vxlan` on Raspberry Pi with recent versions of Ubuntu requires [additional preparation](../installation/requirements.md?os=pi#operating-systems). | ||
* Using `wireguard-native` as the Flannel backend may require additional modules on some Linux distributions. Please see the [WireGuard Install Guide](https://www.wireguard.com/install/) for details. | ||
The WireGuard install steps will ensure the appropriate kernel modules are installed for your operating system. | ||
You must ensure that WireGuard kernel modules are available on every node, both servers and agents, before attempting to use the WireGuard Flannel backend. | ||
|
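As a minimal sketch (assuming the standard K3s configuration file location), the backend can also be selected in the configuration file on every server instead of on the command line:

```yaml
# /etc/rancher/k3s/config.yaml on every server node (illustrative sketch)
# Equivalent to passing --flannel-backend=wireguard-native
flannel-backend: wireguard-native
```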
@@ -183,85 +181,7 @@ Single-stack IPv6 clusters (clusters without IPv4) are supported on K3s using th | |
```bash | ||
--cluster-cidr=2001:cafe:42::/56 --service-cidr=2001:cafe:43::/112 | ||
``` | ||
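A hedged sketch of the same single-stack IPv6 CIDRs expressed in the K3s configuration file (the file path and key-per-flag mapping follow the usual K3s convention; the prefixes mirror the example values above):

```yaml
# /etc/rancher/k3s/config.yaml (illustrative, mirrors the flags above)
cluster-cidr: 2001:cafe:42::/56
service-cidr: 2001:cafe:43::/112
```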
## Nodes Without a Hostname | ||
|
||
## Distributed hybrid or multicloud cluster | ||
|
||
A K3s cluster can still be deployed on nodes which do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded k3s multicloud solution and the integration with the `tailscale` VPN provider. | ||
|
||
:::warning | ||
The latency between nodes will increase as external connectivity requires more hops. This will reduce the network performance and could also impact the health of the cluster if latency is too high. | ||
::: | ||
|
||
:::warning | ||
Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be reachable to each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location. | ||
::: | ||
|
||
### Embedded k3s multicloud solution | ||
|
||
K3s uses WireGuard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a WireGuard tunnel. | ||
|
||
To enable this type of deployment, you must add the following parameters on servers: | ||
```bash | ||
--node-external-ip=<SERVER_EXTERNAL_IP> --flannel-backend=wireguard-native --flannel-external-ip | ||
``` | ||
and on agents: | ||
```bash | ||
--node-external-ip=<AGENT_EXTERNAL_IP> | ||
``` | ||
|
||
where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. Note that the `K3S_URL` config parameter in the agent should use the `SERVER_EXTERNAL_IP` to be able to connect to it. Remember to check the [Networking Requirements](../installation/requirements.md#networking) and allow access to the listed ports on both internal and external addresses. | ||
|
||
`SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP` must be reachable from each other and are normally public IPs. | ||
|
||
:::info Dynamic IPs | ||
If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` then run: | ||
|
||
```bash | ||
systemctl daemon-reload | ||
systemctl restart k3s | ||
``` | ||
::: | ||
|
||
### Integration with the Tailscale VPN provider (experimental) | ||
Some cloud providers, such as Linode, will create machines with "localhost" as the hostname and others may not have a hostname set at all. This can cause problems with domain name resolution. You can run K3s with the `--node-name` flag or `K3S_NODE_NAME` environment variable to set the node name and resolve this issue. | ||
|
||
Available in v1.27.3, v1.26.6, v1.25.11 and newer. | ||
|
||
K3s can integrate with [Tailscale](https://tailscale.com/) so that nodes use the Tailscale VPN service to build a mesh between nodes. | ||
|
||
There are four steps to complete in Tailscale before deploying K3s: | ||
|
||
1. Log in to your Tailscale account | ||
|
||
2. In `Settings > Keys`, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster | ||
|
||
3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Append the CIDR (or CIDRs for dual-stack) in Access controls with the stanza: | ||
```yaml | ||
"autoApprovers": { | ||
"routes": { | ||
"10.42.0.0/16": ["[email protected]"], | ||
"2001:cafe:42::/56": ["[email protected]"], | ||
}, | ||
}, | ||
``` | ||
|
||
4. Install Tailscale on your nodes: | ||
```bash | ||
curl -fsSL https://tailscale.com/install.sh | sh | ||
``` | ||
|
||
To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes: | ||
```bash | ||
--vpn-auth="name=tailscale,joinKey=$AUTH-KEY" | ||
``` | ||
or provide that information in a file and use the parameter: | ||
```bash | ||
--vpn-auth-file=$PATH_TO_FILE | ||
``` | ||
Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the vpn-auth parameters. | ||
:::warning | ||
If you plan on running several K3s clusters using the same tailscale network, please create appropriate [ACLs](https://tailscale.com/kb/1018/acls/) to avoid IP conflicts or use different podCIDR subnets for each cluster. | ||
::: |
@@ -0,0 +1,84 @@ | ||
--- | ||
title: "Distributed hybrid or multicloud cluster" | ||
weight: 25 | ||
--- | ||
|
||
A K3s cluster can still be deployed on nodes which do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded k3s multicloud solution and the integration with the `tailscale` VPN provider. | ||
|
||
:::warning | ||
The latency between nodes will increase as external connectivity requires more hops. This will reduce the network performance and could also impact the health of the cluster if latency is too high. | ||
::: | ||
|
||
:::warning | ||
Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be reachable to each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location. | ||
::: | ||
|
||
### Embedded k3s multicloud solution | ||
|
||
K3s uses WireGuard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a WireGuard tunnel. | ||
|
||
To enable this type of deployment, you must add the following parameters on servers: | ||
```bash | ||
--node-external-ip=<SERVER_EXTERNAL_IP> --flannel-backend=wireguard-native --flannel-external-ip | ||
``` | ||
and on agents: | ||
```bash | ||
--node-external-ip=<AGENT_EXTERNAL_IP> | ||
``` | ||
|
||
where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. Note that the `K3S_URL` config parameter in the agent should use the `SERVER_EXTERNAL_IP` to be able to connect to it. Remember to check the [Networking Requirements](../installation/requirements.md#networking) and allow access to the listed ports on both internal and external addresses. | ||
|
||
`SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP` must be reachable from each other and are normally public IPs. | ||
|
||
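As a hedged sketch, the same server and agent parameters can be kept in `/etc/rancher/k3s/config.yaml` on each node (the addresses below are placeholders, not values from this commit):

```yaml
# Server node: /etc/rancher/k3s/config.yaml (placeholder addresses)
node-external-ip: "203.0.113.10"    # SERVER_EXTERNAL_IP
flannel-backend: wireguard-native
flannel-external-ip: true
```

and on each agent:

```yaml
# Agent node: /etc/rancher/k3s/config.yaml (placeholder addresses)
server: "https://203.0.113.10:6443"  # K3S_URL pointing at SERVER_EXTERNAL_IP
node-external-ip: "198.51.100.20"    # AGENT_EXTERNAL_IP
```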
:::info Dynamic IPs | ||
If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` then run: | ||
|
||
```bash | ||
systemctl daemon-reload | ||
systemctl restart k3s | ||
``` | ||
::: | ||
|
||
### Integration with the Tailscale VPN provider (experimental) | ||
|
||
Available in v1.27.3, v1.26.6, v1.25.11 and newer. | ||
|
||
K3s can integrate with [Tailscale](https://tailscale.com/) so that nodes use the Tailscale VPN service to build a mesh between nodes. | ||
|
||
There are four steps to complete in Tailscale before deploying K3s: | ||
|
||
1. Log in to your Tailscale account | ||
|
||
2. In `Settings > Keys`, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster | ||
|
||
3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Append the CIDR (or CIDRs for dual-stack) in Access controls with the stanza: | ||
```yaml | ||
"autoApprovers": { | ||
"routes": { | ||
"10.42.0.0/16": ["[email protected]"], | ||
"2001:cafe:42::/56": ["[email protected]"], | ||
}, | ||
}, | ||
``` | ||
|
||
4. Install Tailscale on your nodes: | ||
```bash | ||
curl -fsSL https://tailscale.com/install.sh | sh | ||
``` | ||
|
||
To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes: | ||
```bash | ||
--vpn-auth="name=tailscale,joinKey=$AUTH-KEY" | ||
``` | ||
or provide that information in a file and use the parameter: | ||
```bash | ||
--vpn-auth-file=$PATH_TO_FILE | ||
``` | ||
Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the vpn-auth parameters. | ||
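As a hedged illustration (the key value is a placeholder and the key-per-flag mapping in `config.yaml` follows the usual K3s convention), the same VPN parameters can be kept in the configuration file instead of on the command line:

```yaml
# /etc/rancher/k3s/config.yaml (placeholder auth key)
vpn-auth: "name=tailscale,joinKey=tskey-auth-xxxxxxxxxxxx"
# With a self-hosted control server such as headscale, append controlServerURL:
# vpn-auth: "name=tailscale,joinKey=tskey-auth-xxxxxxxxxxxx,controlServerURL=https://headscale.example.com"
```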
:::warning | ||
If you plan on running several K3s clusters using the same tailscale network, please create appropriate [ACLs](https://tailscale.com/kb/1018/acls/) to avoid IP conflicts or use different podCIDR subnets for each cluster. | ||
::: |
@@ -0,0 +1,75 @@ | ||
--- | ||
title: "Multus and IPAM plugins" | ||
weight: 25 | ||
--- | ||
|
||
[Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni) is a CNI plugin that enables attaching multiple network interfaces to pods. Multus does not replace CNI plugins; instead, it acts as a CNI plugin multiplexer. Multus is useful in certain use cases, especially when pods are network intensive and require extra network interfaces that support dataplane acceleration techniques such as SR-IOV. | ||
|
||
Multus cannot be deployed standalone. It always requires at least one conventional CNI plugin that fulfills the Kubernetes cluster network requirements. That CNI plugin becomes the default for Multus, and will be used to provide the primary interface for all pods. When deploying K3s with default options, that CNI plugin is Flannel. | ||
|
||
To deploy Multus, we recommend using the following helm repo: | ||
``` | ||
helm repo add rke2-charts https://rke2-charts.rancher.io | ||
helm repo update | ||
``` | ||
|
||
Then, create a values file with the necessary configuration. The configuration depends on the IPAM plugin to be used, i.e. how the extra Multus interfaces of your pods will get their IP addresses. There are three options: host-local, DHCP daemon, and Whereabouts: | ||
|
||
<Tabs groupId = "MultusIPAMplugins"> | ||
<TabItem value="host-local" default> | ||
The host-local IPAM plugin allocates IP addresses out of a set of address ranges. It stores the state locally on the host filesystem, so it can only guarantee uniqueness of IP addresses on a single host. Therefore, we don't recommend it for multi-node clusters. This IPAM plugin does not require any extra deployment. For more information, see https://www.cni.dev/plugins/current/ipam/host-local/. | ||
|
||
To use the host-local plugin, please create a file called `multus-values.yaml` with the following content: | ||
``` | ||
config: | ||
cni_conf: | ||
confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d | ||
binDir: /var/lib/rancher/k3s/data/current/bin/ | ||
kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig | ||
``` | ||
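After installing Multus with these values, a hedged example of a NetworkAttachmentDefinition using host-local IPAM might look like the following (the `macvlan` type, `eth0` master interface, and subnet are illustrative assumptions, not values from this commit):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-hostlocal
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216"
      }
    }'
```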
|
||
</TabItem> | ||
<TabItem value="Whereabouts" default> | ||
[Whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) is an IP Address Management (IPAM) CNI plugin that assigns IP addresses cluster-wide. | ||
|
||
To use the Whereabouts IPAM plugin, please create a file called `multus-values.yaml` with the following content: | ||
``` | ||
config: | ||
cni_conf: | ||
confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d | ||
binDir: /var/lib/rancher/k3s/data/current/bin/ | ||
kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig | ||
rke2-whereabouts: | ||
fullnameOverride: whereabouts | ||
enabled: true | ||
cniConf: | ||
confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d | ||
binDir: /var/lib/rancher/k3s/data/current/bin/ | ||
``` | ||
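A hedged example of a NetworkAttachmentDefinition using Whereabouts for cluster-wide address assignment (again, the `macvlan` type, `eth0` master, and range are illustrative assumptions):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.10.0.0/24"
      }
    }'
```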
|
||
</TabItem> | ||
<TabItem value="Multus DHCP daemon" default> | ||
The dhcp IPAM plugin can be deployed when there is already a DHCP server running on the network. It relies on a DHCP daemonset that takes care of periodically renewing the DHCP lease. For more information please check the official docs of [DHCP IPAM plugin](https://www.cni.dev/plugins/current/ipam/dhcp/). | ||
|
||
To use this DHCP plugin, please create a file called `multus-values.yaml` with the following content: | ||
``` | ||
config: | ||
cni_conf: | ||
confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d | ||
binDir: /var/lib/rancher/k3s/data/current/bin/ | ||
kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig | ||
manifests: | ||
dhcpDaemonSet: true | ||
``` | ||
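A hedged example of a NetworkAttachmentDefinition delegating address assignment to the DHCP daemon (`macvlan` type and `eth0` master are illustrative assumptions):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-dhcp
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "dhcp"
      }
    }'
```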
|
||
</TabItem> | ||
</Tabs> | ||
|
||
After creating the `multus-values.yaml` file, everything is ready to install Multus: | ||
``` | ||
helm install multus rke2-charts/rke2-multus -n kube-system --kubeconfig /etc/rancher/k3s/k3s.yaml --values multus-values.yaml | ||
``` | ||
|
||
That will create a daemonset called multus, which deploys Multus and all regular CNI binaries (e.g. macvlan) in `/var/lib/rancher/k3s/data/current/` and the correct Multus config in `/var/lib/rancher/k3s/agent/etc/cni/net.d`. | ||
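To verify the setup, a pod can request the extra interface through the standard Multus annotation. This is a hedged sketch that references a NetworkAttachmentDefinition such as the illustrative `macvlan-hostlocal` example above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multus-test
  annotations:
    # Attach the secondary interface defined by the named NetworkAttachmentDefinition
    k8s.v1.cni.cncf.io/networks: macvlan-hostlocal
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "infinity"]
```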
|
||
For more information about Multus, refer to the [multus-cni](https://github.com/k8snetworkplumbingwg/multus-cni/tree/master/docs) documentation. |