
Organize network docs differently
Signed-off-by: Manuel Buil <[email protected]>
manuelbuil committed Mar 27, 2024
1 parent d051e9a commit 8515447
Showing 20 changed files with 725 additions and 182 deletions.
2 changes: 1 addition & 1 deletion docs/installation/installation.md
@@ -7,7 +7,7 @@ This section contains instructions for installing K3s in various environments. P

[Configuration Options](configuration.md) provides guidance on the options available to you when installing K3s.

[Network Options](network-options.md) provides guidance on the networking options available in k3s.
[Network Options](network-options/network-options.md) provides guidance on the networking options available in k3s.

[Private Registry Configuration](private-registry.md) covers use of `registries.yaml` to configure container image registry mirrors.

@@ -1,19 +1,19 @@
---
title: "Network Options"
title: "Basic Network Options"
weight: 25
---

This page describes K3s network configuration options, including configuration or replacement of Flannel, and configuring IPv6.

> **Note:** Please reference the [Networking](../networking.md) page for information about CoreDNS, Traefik, and the Service LB.
> **Note:** Please reference the [Networking](../../networking.md) page for information about CoreDNS, Traefik, and the Service LB.
## Flannel Options

[Flannel](https://github.com/flannel-io/flannel/blob/master/README.md) is a lightweight provider of layer 3 network fabric that implements the Kubernetes Container Network Interface (CNI). It is what is commonly referred to as a CNI Plugin.

* Flannel options can only be set on server nodes, and must be identical on all servers in the cluster.
* The default backend for Flannel is `vxlan`. To enable encryption, use the `wireguard-native` backend.
* Using `vxlan` on Raspberry Pi with recent versions of Ubuntu requires [additional preparation](./requirements.md?os=pi#operating-systems).
* Using `vxlan` on Raspberry Pi with recent versions of Ubuntu requires [additional preparation](../requirements.md?os=pi#operating-systems).
* Using `wireguard-native` as the Flannel backend may require additional modules on some Linux distributions. Please see the [WireGuard Install Guide](https://www.wireguard.com/install/) for details.
The WireGuard install steps will ensure the appropriate kernel modules are installed for your operating system.
You must ensure that WireGuard kernel modules are available on every node, both servers and agents, before attempting to use the WireGuard Flannel backend.
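
As an illustration (assuming the standard install script; this command is a sketch and not part of this change), enabling the encrypted backend on a server could look like:

```bash
# Sketch: select the wireguard-native Flannel backend at install time.
# The same Flannel flags must be set on every server in the cluster.
curl -sfL https://get.k3s.io | sh -s - server --flannel-backend=wireguard-native
```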
@@ -120,7 +120,7 @@ K3s agents and servers maintain websocket tunnels between nodes that are used to
This allows agents to operate without exposing the kubelet and container runtime streaming ports to incoming connections, and for the control-plane to connect to cluster services when operating with the agent disabled.
This functionality is equivalent to the [Konnectivity](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/) service commonly used on other Kubernetes distributions, and is managed via the apiserver's egress selector configuration.

The default mode is `agent`. `pod` or `cluster` modes are recommended when running [agentless servers](../advanced.md#running-agentless-servers-experimental), in order to provide the apiserver with access to cluster service endpoints in the absence of flannel and kube-proxy.
The default mode is `agent`. `pod` or `cluster` modes are recommended when running [agentless servers](../../advanced.md#running-agentless-servers-experimental), in order to provide the apiserver with access to cluster service endpoints in the absence of flannel and kube-proxy.

The egress selector mode may be configured on servers via the `--egress-selector-mode` flag, and offers four modes:
* `disabled`: The apiserver does not use agent tunnels to communicate with kubelets or cluster endpoints.
@@ -183,85 +183,3 @@ Single-stack IPv6 clusters (clusters without IPv4) are supported on K3s using th
```bash
--cluster-cidr=2001:cafe:42::/56 --service-cidr=2001:cafe:43::/112
```

## Distributed hybrid or multicloud cluster

A K3s cluster can still be deployed on nodes that do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded K3s multicloud solution and integration with the `tailscale` VPN provider.

:::warning
The latency between nodes will increase as external connectivity requires more hops. This will reduce the network performance and could also impact the health of the cluster if latency is too high.
:::

:::warning
Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be reachable to each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location.
:::

### Embedded k3s multicloud solution

K3s uses WireGuard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a WireGuard tunnel.

To enable this type of deployment, you must add the following parameters on servers:
```bash
--node-external-ip=<SERVER_EXTERNAL_IP> --flannel-backend=wireguard-native --flannel-external-ip
```
and on agents:
```bash
--node-external-ip=<AGENT_EXTERNAL_IP>
```

where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. Note that the `K3S_URL` config parameter on the agent should point at the `SERVER_EXTERNAL_IP` so the agent can reach the server. Remember to check the [Networking Requirements](../installation/requirements.md#networking) and allow access to the listed ports on both internal and external addresses.

There must be connectivity between `SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP`; these are normally public IPs.

:::info Dynamic IPs
If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` then run:

```bash
systemctl daemon-reload
systemctl restart k3s
```
:::

### Integration with the Tailscale VPN provider (experimental)

Available in v1.27.3, v1.26.6, v1.25.11 and newer.

K3s can integrate with [Tailscale](https://tailscale.com/) so that nodes use the Tailscale VPN service to build a mesh between nodes.

There are four steps to be done with Tailscale before deploying K3s:

1. Log in to your Tailscale account

2. In `Settings > Keys`, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster

3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Add the CIDR (or CIDRs for dual-stack) to the Access Controls using the following stanza:
```yaml
"autoApprovers": {
"routes": {
"10.42.0.0/16": ["[email protected]"],
"2001:cafe:42::/56": ["[email protected]"],
},
},
```

4. Install Tailscale in your nodes:
```bash
curl -fsSL https://tailscale.com/install.sh | sh
```

To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes:
```bash
--vpn-auth="name=tailscale,joinKey=$AUTH-KEY
```
or provide that information in a file and use the parameter:
```bash
--vpn-auth-file=$PATH_TO_FILE
```
Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the `--vpn-auth` parameters.

:::warning
If you plan on running several K3s clusters using the same tailscale network, please create appropriate [ACLs](https://tailscale.com/kb/1018/acls/) to avoid IP conflicts or use different podCIDR subnets for each cluster.
:::
84 changes: 84 additions & 0 deletions docs/installation/network-options/distributed-multicloud.md
@@ -0,0 +1,84 @@
---
title: "Distributed hybrid or multicloud cluster"
weight: 25
---

A K3s cluster can still be deployed on nodes that do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded K3s multicloud solution and integration with the `tailscale` VPN provider.

:::warning
The latency between nodes will increase as external connectivity requires more hops. This will reduce the network performance and could also impact the health of the cluster if latency is too high.
:::

:::warning
Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be reachable to each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location.
:::

### Embedded k3s multicloud solution

K3s uses WireGuard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a WireGuard tunnel.

To enable this type of deployment, you must add the following parameters on servers:
```bash
--node-external-ip=<SERVER_EXTERNAL_IP> --flannel-backend=wireguard-native --flannel-external-ip
```
and on agents:
```bash
--node-external-ip=<AGENT_EXTERNAL_IP>
```

where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. Note that the `K3S_URL` config parameter on the agent should point at the `SERVER_EXTERNAL_IP` so the agent can reach the server. Remember to check the [Networking Requirements](../../installation/requirements.md#networking) and allow access to the listed ports on both internal and external addresses.

There must be connectivity between `SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP`; these are normally public IPs.
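
As an illustration (the IPs, token, and port below are placeholders, not values from this page), an agent joining such a server over the public internet could be installed like this:

```bash
# Illustrative only: replace the placeholders with your real values.
# 6443 is the default K3s supervisor/API port.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<SERVER_EXTERNAL_IP>:6443 \
  K3S_TOKEN=<token> \
  sh -s - agent --node-external-ip=<AGENT_EXTERNAL_IP>
```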

:::info Dynamic IPs
If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` then run:

```bash
systemctl daemon-reload
systemctl restart k3s
```
:::
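
If K3s was installed with the standard install script, the flag typically appears on the `ExecStart` line of that unit. The excerpt below is only a sketch; the exact contents of your unit will differ:

```
# /etc/systemd/system/k3s.service (excerpt, illustrative)
ExecStart=/usr/local/bin/k3s \
    server \
        '--node-external-ip=<NEW_EXTERNAL_IP>' \
        '--flannel-backend=wireguard-native' \
        '--flannel-external-ip'
```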

### Integration with the Tailscale VPN provider (experimental)

Available in v1.27.3, v1.26.6, v1.25.11 and newer.

K3s can integrate with [Tailscale](https://tailscale.com/) so that nodes use the Tailscale VPN service to build a mesh between nodes.

There are four steps to be done with Tailscale before deploying K3s:

1. Log in to your Tailscale account

2. In `Settings > Keys`, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster

3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Add the CIDR (or CIDRs for dual-stack) to the Access Controls using the following stanza:
```yaml
"autoApprovers": {
"routes": {
"10.42.0.0/16": ["[email protected]"],
"2001:cafe:42::/56": ["[email protected]"],
},
},
```

4. Install Tailscale in your nodes:
```bash
curl -fsSL https://tailscale.com/install.sh | sh
```

To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes:
```bash
--vpn-auth="name=tailscale,joinKey=$AUTH-KEY
```
or provide that information in a file and use the parameter:
```bash
--vpn-auth-file=$PATH_TO_FILE
```
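
As a sketch — assuming the file simply holds the same comma-separated string that `--vpn-auth` accepts on the command line — its contents would be a single line such as:

```
name=tailscale,joinKey=$AUTH-KEY
```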
Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the `--vpn-auth` parameters.

:::warning
If you plan on running several K3s clusters using the same tailscale network, please create appropriate [ACLs](https://tailscale.com/kb/1018/acls/) to avoid IP conflicts or use different podCIDR subnets for each cluster.
:::
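
For example (values illustrative only), the servers of a second cluster on the same tailnet could be given their own, non-overlapping podCIDR, with that route also approved in `autoApprovers`:

```bash
# Second cluster: pick a podCIDR that does not overlap with 10.42.0.0/16 and
# approve that route (here 10.52.0.0/16) in the tailnet ACLs shown earlier.
--cluster-cidr=10.52.0.0/16 --vpn-auth="name=tailscale,joinKey=$AUTH-KEY"
```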
75 changes: 75 additions & 0 deletions docs/installation/network-options/multus-ipams.md
@@ -0,0 +1,75 @@
---
title: "Multus and IPAM plugins"
weight: 25
---

[Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni) is a CNI plugin that enables attaching multiple network interfaces to pods. Multus does not replace CNI plugins, instead it acts as a CNI plugin multiplexer. Multus is useful in certain use cases, especially when pods are network intensive and require extra network interfaces that support dataplane acceleration techniques such as SR-IOV.

Multus cannot be deployed standalone. It always requires at least one conventional CNI plugin that fulfills the Kubernetes cluster network requirements. That CNI plugin becomes the default for Multus and is used to provide the primary interface for all pods. When deploying K3s with default options, that CNI plugin is Flannel.

To deploy Multus, we recommend using the following helm repo:
```bash
helm repo add rke2-charts https://rke2-charts.rancher.io
helm repo update
```

Then, to set the necessary configuration for it to work, a correct config file must be created. The configuration depends on the IPAM plugin to be used, i.e. how the extra Multus interfaces of your pods will get their IP addresses. There are three options: host-local, DHCP daemon, and Whereabouts:

<Tabs groupId = "MultusIPAMplugins">
<TabItem value="host-local" default>
The host-local IPAM plugin allocates IP addresses from a set of address ranges. It stores the allocation state on the local host filesystem, which guarantees uniqueness of IP addresses only on a single host. Therefore, we don't recommend it for multi-node clusters. This IPAM plugin does not require any extra deployment. For more information, see https://www.cni.dev/plugins/current/ipam/host-local/.

To use the host-local plugin, please create a file called `multus-values.yaml` with the following content:
```yaml
config:
  cni_conf:
    confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
    binDir: /var/lib/rancher/k3s/data/current/bin/
    kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig
```

</TabItem>
<TabItem value="Whereabouts" default>
[Whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) is an IP Address Management (IPAM) CNI plugin that assigns IP addresses cluster-wide.

To use the Whereabouts IPAM plugin, please create a file called `multus-values.yaml` with the following content:
```yaml
config:
  cni_conf:
    confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
    binDir: /var/lib/rancher/k3s/data/current/bin/
    kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig
rke2-whereabouts:
  fullnameOverride: whereabouts
  enabled: true
  cniConf:
    confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
    binDir: /var/lib/rancher/k3s/data/current/bin/
```

</TabItem>
<TabItem value="Multus DHCP daemon" default>
The dhcp IPAM plugin can be deployed when there is already a DHCP server running on the network. The configuration below deploys a DHCP daemonset that takes care of periodically renewing the DHCP lease. For more information, please check the official docs of the [DHCP IPAM plugin](https://www.cni.dev/plugins/current/ipam/dhcp/).

To use this DHCP plugin, please create a file called `multus-values.yaml` with the following content:
```yaml
config:
  cni_conf:
    confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
    binDir: /var/lib/rancher/k3s/data/current/bin/
    kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig
manifests:
  dhcpDaemonSet: true
```

</TabItem>
</Tabs>

After creating the `multus-values.yaml` file, everything is ready to install Multus:
```bash
helm install multus rke2-charts/rke2-multus -n kube-system --kubeconfig /etc/rancher/k3s/k3s.yaml --values multus-values.yaml
```

That will create a daemonset called `multus`, which deploys Multus and all regular CNI binaries (e.g. macvlan) in `/var/lib/rancher/k3s/data/current/`, and writes the correct Multus config in `/var/lib/rancher/k3s/agent/etc/cni/net.d`.
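
As a usage sketch (not part of this commit), the extra interface is then described with a `NetworkAttachmentDefinition` and referenced from a pod annotation. The interface name `eth0`, the subnet, and the object names below are assumptions for illustration; if you chose Whereabouts or the DHCP daemon above, the `ipam` section would instead be `{"type": "whereabouts", "range": "<cidr>"}` or `{"type": "dhcp"}`.

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-example
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multus-test
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-example
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
```

Apply the manifest with kubectl against the K3s kubeconfig; the pod should come up with an additional `net1` interface on top of the default interface provided by Flannel.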

For more information about Multus, refer to the [multus-cni](https://github.com/k8snetworkplumbingwg/multus-cni/tree/master/docs) documentation.
12 changes: 12 additions & 0 deletions docs/installation/network-options/network-options.md
@@ -0,0 +1,12 @@
---
title: "Network options"
weight: 20
---

This section contains instructions for configuring networking in K3s.

[Basic Network Options](basic-network-options.md) covers the basic networking configuration of the cluster, such as Flannel and single/dual-stack configurations.

[Hybrid/Multicloud cluster](distributed-multicloud.md) provides guidance on the options available to span a K3s cluster across remote or hybrid nodes.

[Multus and IPAM plugins](multus-ipams.md) provides guidance on leveraging Multus in K3s in order to attach multiple network interfaces per pod.
2 changes: 1 addition & 1 deletion docs/networking.md
@@ -5,7 +5,7 @@ weight: 35

This page explains how CoreDNS, Traefik Ingress controller, Network Policy controller, and ServiceLB load balancer controller work within K3s.

Refer to the [Installation Network Options](./installation/network-options.md) page for details on Flannel configuration options and backend selection, or how to set up your own CNI.
Refer to the [Installation Network Options](./installation/network-options/network-options.md) page for details on Flannel configuration options and backend selection, or how to set up your own CNI.

For information on which ports need to be opened for K3s, refer to the [Networking Requirements](./installation/requirements.md#networking).

@@ -7,7 +7,7 @@ This section contains instructions for installing K3s in various environments. P

[Configuration Options](configuration.md) provides guidance on the options available to you when installing K3s.

[Network Options](network-options.md) provides guidance on the networking options available in k3s.
[Network Options](network-options/network-options.md) provides guidance on the networking options available in k3s.

[Private Registry Configuration](private-registry.md) covers use of `registries.yaml` to configure container image registry mirrors.

