diff --git a/docs/datastore/cluster-loadbalancer.md b/docs/datastore/cluster-loadbalancer.md index ae001c312..37afc9ddb 100644 --- a/docs/datastore/cluster-loadbalancer.md +++ b/docs/datastore/cluster-loadbalancer.md @@ -7,7 +7,7 @@ weight: 30 This section describes how to install an external load balancer in front of a High Availability (HA) K3s cluster's server nodes. Two examples are provided: Nginx and HAProxy. :::tip -External load-balancers should not be confused with the embedded ServiceLB, which is an embedded controller that allows for use of Kubernetes LoadBalancer Services without deploying a third-party load-balancer controller. For more details, see [Service Load Balancer](../networking.md#service-load-balancer). +External load-balancers should not be confused with the embedded ServiceLB, which is an embedded controller that allows for use of Kubernetes LoadBalancer Services without deploying a third-party load-balancer controller. For more details, see [Service Load Balancer](../networking/networking-services.md#service-load-balancer). External load-balancers can be used to provide a fixed registration address for registering nodes, or for external access to the Kubernetes API Server. For exposing LoadBalancer Services, external load-balancers can be used alongside or instead of ServiceLB, but in most cases, replacement load-balancer controllers such as MetalLB or Kube-VIP are a better choice. ::: @@ -191,4 +191,4 @@ server-2 Ready control-plane,etcd,master 3m58s v1.27.3+k3s1 server-3 Ready control-plane,etcd,master 3m16s v1.27.3+k3s1 ``` - \ No newline at end of file + diff --git a/docs/installation/installation.md b/docs/installation/installation.md index 279b5266e..85aa3e43b 100644 --- a/docs/installation/installation.md +++ b/docs/installation/installation.md @@ -7,8 +7,6 @@ This section contains instructions for installing K3s in various environments. P [Configuration Options](configuration.md) provides guidance on the options available to you when installing K3s. -[Network Options](network-options.md) provides guidance on the networking options available in k3s. - [Private Registry Configuration](private-registry.md) covers use of `registries.yaml` to configure container image registry mirrors. [Embedded Mirror](registry-mirror.md) shows how to enable the embedded distributed image registry mirror. diff --git a/docs/installation/network-options.md b/docs/networking/basic-network-options.md similarity index 72% rename from docs/installation/network-options.md rename to docs/networking/basic-network-options.md index d4689529a..d7eff9b7d 100644 --- a/docs/installation/network-options.md +++ b/docs/networking/basic-network-options.md @@ -1,11 +1,9 @@ --- -title: "Network Options" +title: "Basic Network Options" weight: 25 --- -This page describes K3s network configuration options, including configuration or replacement of Flannel, and configuring IPv6. - -> **Note:** Please reference the [Networking](../networking.md) page for information about CoreDNS, Traefik, and the Service LB. +This page describes K3s network configuration options, including configuration or replacement of Flannel, and configuring IPv6 or dualStack. ## Flannel Options @@ -13,7 +11,7 @@ This page describes K3s network configuration options, including configuration o * Flannel options can only be set on server nodes, and must be identical on all servers in the cluster. * The default backend for Flannel is `vxlan`. To enable encryption, use the `wireguard-native` backend. 
-* Using `vxlan` on Rasperry Pi with recent versions of Ubuntu requires [additional preparation](./requirements.md?os=pi#operating-systems). +* Using `vxlan` on Rasperry Pi with recent versions of Ubuntu requires [additional preparation](../installation/requirements.md?os=pi#operating-systems). * Using `wireguard-native` as the Flannel backend may require additional modules on some Linux distributions. Please see the [WireGuard Install Guide](https://www.wireguard.com/install/) for details. The WireGuard install steps will ensure the appropriate kernel modules are installed for your operating system. You must ensure that WireGuard kernel modules are available on every node, both servers and agents, before attempting to use the WireGuard Flannel backend. @@ -183,85 +181,7 @@ Single-stack IPv6 clusters (clusters without IPv4) are supported on K3s using th ```bash --cluster-cidr=2001:cafe:42::/56 --service-cidr=2001:cafe:43::/112 ``` +## Nodes Without a Hostname -## Distributed hybrid or multicloud cluster - -A K3s cluster can still be deployed on nodes which do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded k3s multicloud solution and the integration with the `tailscale` VPN provider. - -:::warning -The latency between nodes will increase as external connectivity requires more hops. This will reduce the network performance and could also impact the health of the cluster if latency is too high. -::: - -:::warning -Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be reachable to each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location. -::: - -### Embedded k3s multicloud solution - -K3s uses wireguard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a wireguard tunnel. - -To enable this type of deployment, you must add the following parameters on servers: -```bash ---node-external-ip= --flannel-backend=wireguard-native --flannel-external-ip -``` -and on agents: -```bash ---node-external-ip= -``` - -where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. Note that the `K3S_URL` config parameter in the agent should use the `SERVER_EXTERNAL_IP` to be able to connect to it. Remember to check the [Networking Requirements](../installation/requirements.md#networking) and allow access to the listed ports on both internal and external addresses. - -Both `SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP` must have connectivity between them and are normally public IPs. - -:::info Dynamic IPs -If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` then run: - -```bash -systemctl daemon-reload -systemctl restart k3s -``` -::: - -### Integration with the Tailscale VPN provider (experimental) +Some cloud providers, such as Linode, will create machines with "localhost" as the hostname and others may not have a hostname set at all. This can cause problems with domain name resolution. 
You can run K3s with the `--node-name` flag or `K3S_NODE_NAME` environment variable and this will pass the node name to resolve this issue. -Available in v1.27.3, v1.26.6, v1.25.11 and newer. - -K3s can integrate with [Tailscale](https://tailscale.com/) so that nodes use the Tailscale VPN service to build a mesh between nodes. - -There are four steps to be done with Tailscale before deploying K3s: - -1. Log in to your Tailscale account - -2. In `Settings > Keys`, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster - -3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Append the CIDR (or CIDRs for dual-stack) in Access controls with the stanza: -```yaml -"autoApprovers": { - "routes": { - "10.42.0.0/16": ["your_account@xyz.com"], - "2001:cafe:42::/56": ["your_account@xyz.com"], - }, - }, -``` - -4. Install Tailscale in your nodes: -```bash -curl -fsSL https://tailscale.com/install.sh | sh -``` - -To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes: -```bash ---vpn-auth="name=tailscale,joinKey=$AUTH-KEY -``` -or provide that information in a file and use the parameter: -```bash ---vpn-auth-file=$PATH_TO_FILE -``` - -Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the vpn-auth parameters - -:::warning - -If you plan on running several K3s clusters using the same tailscale network, please create appropriate [ACLs](https://tailscale.com/kb/1018/acls/) to avoid IP conflicts or use different podCIDR subnets for each cluster. - -::: diff --git a/docs/networking/distributed-multicloud.md b/docs/networking/distributed-multicloud.md new file mode 100644 index 000000000..347979076 --- /dev/null +++ b/docs/networking/distributed-multicloud.md @@ -0,0 +1,84 @@ +--- +title: "Distributed hybrid or multicloud cluster" +weight: 25 +--- + +A K3s cluster can still be deployed on nodes which do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded k3s multicloud solution and the integration with the `tailscale` VPN provider. + +:::warning +The latency between nodes will increase as external connectivity requires more hops. This will reduce the network performance and could also impact the health of the cluster if latency is too high. +::: + +:::warning +Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be reachable to each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location. +::: + +### Embedded k3s multicloud solution + +K3s uses wireguard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a wireguard tunnel. + +To enable this type of deployment, you must add the following parameters on servers: +```bash +--node-external-ip= --flannel-backend=wireguard-native --flannel-external-ip +``` +and on agents: +```bash +--node-external-ip= +``` + +where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. Note that the `K3S_URL` config parameter in the agent should use the `SERVER_EXTERNAL_IP` to be able to connect to it. 
Remember to check the [Networking Requirements](../installation/requirements.md#networking) and allow access to the listed ports on both internal and external addresses. + +Both `SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP` must have connectivity between them and are normally public IPs. + +:::info Dynamic IPs +If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` then run: + +```bash +systemctl daemon-reload +systemctl restart k3s +``` +::: + +### Integration with the Tailscale VPN provider (experimental) + +Available in v1.27.3, v1.26.6, v1.25.11 and newer. + +K3s can integrate with [Tailscale](https://tailscale.com/) so that nodes use the Tailscale VPN service to build a mesh between nodes. + +There are four steps to be done with Tailscale before deploying K3s: + +1. Log in to your Tailscale account + +2. In `Settings > Keys`, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster + +3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Append the CIDR (or CIDRs for dual-stack) in Access controls with the stanza: +```yaml +"autoApprovers": { + "routes": { + "10.42.0.0/16": ["your_account@xyz.com"], + "2001:cafe:42::/56": ["your_account@xyz.com"], + }, + }, +``` + +4. Install Tailscale in your nodes: +```bash +curl -fsSL https://tailscale.com/install.sh | sh +``` + +To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes: +```bash +--vpn-auth="name=tailscale,joinKey=$AUTH-KEY +``` +or provide that information in a file and use the parameter: +```bash +--vpn-auth-file=$PATH_TO_FILE +``` + +Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the vpn-auth parameters + +:::warning + +If you plan on running several K3s clusters using the same tailscale network, please create appropriate [ACLs](https://tailscale.com/kb/1018/acls/) to avoid IP conflicts or use different podCIDR subnets for each cluster. + +::: diff --git a/docs/networking/multus-ipams.md b/docs/networking/multus-ipams.md new file mode 100644 index 000000000..86428bba2 --- /dev/null +++ b/docs/networking/multus-ipams.md @@ -0,0 +1,75 @@ +--- +title: "Multus and IPAM plugins" +weight: 25 +--- + +[Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni) is a CNI plugin that enables attaching multiple network interfaces to pods. Multus does not replace CNI plugins, instead it acts as a CNI plugin multiplexer. Multus is useful in certain use cases, especially when pods are network intensive and require extra network interfaces that support dataplane acceleration techniques such as SR-IOV. + +Multus can not be deployed standalone. It always requires at least one conventional CNI plugin that fulfills the Kubernetes cluster network requirements. That CNI plugin becomes the default for Multus, and will be used to provide the primary interface for all pods. When deploying K3s with default options, that CNI plugin is Flannel. + +To deploy Multus, we recommend using the following helm repo: +``` +helm repo add rke2-charts https://rke2-charts.rancher.io +helm repo update +``` + +Then, to set the necessary configuration for it to work, a correct config file must be created. The configuration will depend on the IPAM plugin to be used, i.e. 
how your pods using Multus extra interfaces will configure the IPs for those extra interfaces. There are three options: host-local, DHCP Daemon and whereabouts: + + + +The host-local IPAM plugin allocates ip addresses out of a set of address ranges. It stores the state locally on the host filesystem, hence ensuring uniqueness of IP addresses on a single host. Therefore, we don't recommend it for multi-node clusters. This IPAM plugin does not require any extra deployment. For more information: https://www.cni.dev/plugins/current/ipam/host-local/. + +To use the host-local plugin, please create a file called `multus-values.yaml` with the following content: +``` +config: + cni_conf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ + kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig +``` + + + +[Whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) is an IP Address Management (IPAM) CNI plugin that assigns IP addresses cluster-wide. + +To use the Whereabouts IPAM plugin, please create a file called multus-values.yaml with the following content: +``` +config: + cni_conf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ + kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig +rke2-whereabouts: + fullnameOverride: whereabouts + enabled: true + cniConf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ +``` + + + +The dhcp IPAM plugin can be deployed when there is already a DHCP server running on the network. This daemonset takes care of periodically renewing the DHCP lease. For more information please check the official docs of [DHCP IPAM plugin](https://www.cni.dev/plugins/current/ipam/dhcp/). + +To use this DHCP plugin, please create a file called multus-values.yaml with the following content: +``` +config: + cni_conf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ + kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig +manifests: + dhcpDaemonSet: true +``` + + + + +After creating the `multus-values.yaml` file, everything is ready to install Multus: +``` +helm install multus rke2-charts/rke2-multus -n kube-system --kubeconfig /etc/rancher/k3s/k3s.yaml --values multus-values.yaml +``` + +That will create a daemonset called multus which will deploy multus and all regular cni binaries in /var/lib/rancher/k3s/data/current/ (e.g. macvlan) and the correct Multus config in /var/lib/rancher/k3s/agent/etc/cni/net.d + +For more information about Multus, refer to the [multus-cni](https://github.com/k8snetworkplumbingwg/multus-cni/tree/master/docs) documentation. diff --git a/docs/networking.md b/docs/networking/networking-services.md similarity index 86% rename from docs/networking.md rename to docs/networking/networking-services.md index c207c5133..6032a949f 100644 --- a/docs/networking.md +++ b/docs/networking/networking-services.md @@ -1,13 +1,13 @@ --- -title: "Networking" +title: "Networking Services" weight: 35 --- This page explains how CoreDNS, Traefik Ingress controller, Network Policy controller, and ServiceLB load balancer controller work within K3s. -Refer to the [Installation Network Options](./installation/network-options.md) page for details on Flannel configuration options and backend selection, or how to set up your own CNI. 
+Refer to the [Installation Network Options](./basic-network-options.md) page for details on Flannel configuration options and backend selection, or how to set up your own CNI. -For information on which ports need to be opened for K3s, refer to the [Networking Requirements](./installation/requirements.md#networking). +For information on which ports need to be opened for K3s, refer to the [Networking Requirements](../installation/requirements.md#networking). ## CoreDNS @@ -21,9 +21,9 @@ If you don't install CoreDNS, you will need to install a cluster DNS provider yo The Traefik ingress controller deploys a LoadBalancer Service that uses ports 80 and 443. By default, ServiceLB will expose these ports on all cluster members, meaning these ports will not be usable for other HostPort or NodePort pods. -Traefik is deployed by default when starting the server. For more information see [Managing Packaged Components](./installation/packaged-components.md). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml`. +Traefik is deployed by default when starting the server. For more information see [Managing Packaged Components](../installation/packaged-components.md). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml`. -The `traefik.yaml` file should not be edited manually, as K3s will replace the file with defaults at startup. Instead, you should customize Traefik by creating an additional `HelmChartConfig` manifest in `/var/lib/rancher/k3s/server/manifests`. For more details and an example see [Customizing Packaged Components with HelmChartConfig](./helm.md#customizing-packaged-components-with-helmchartconfig). For more information on the possible configuration values, refer to the official [Traefik Helm Configuration Parameters.](https://github.com/traefik/traefik-helm-chart/tree/master/traefik). +The `traefik.yaml` file should not be edited manually, as K3s will replace the file with defaults at startup. Instead, you should customize Traefik by creating an additional `HelmChartConfig` manifest in `/var/lib/rancher/k3s/server/manifests`. For more details and an example see [Customizing Packaged Components with HelmChartConfig](../helm.md#customizing-packaged-components-with-helmchartconfig). For more information on the possible configuration values, refer to the official [Traefik Helm Configuration Parameters.](https://github.com/traefik/traefik-helm-chart/tree/master/traefik). To remove Traefik from your cluster, start all servers with the `--disable=traefik` flag. @@ -104,6 +104,4 @@ Before deploying an external CCM, you must start all K3s servers with the `--dis If you disable the built-in CCM and do not deploy and properly configure an external substitute, nodes will remain tainted and unschedulable. ::: -## Nodes Without a Hostname -Some cloud providers, such as Linode, will create machines with "localhost" as the hostname and others may not have a hostname set at all. This can cause problems with domain name resolution. You can run K3s with the `--node-name` flag or `K3S_NODE_NAME` environment variable and this will pass the node name to resolve this issue. diff --git a/docs/networking/networking.md b/docs/networking/networking.md new file mode 100644 index 000000000..3aa64ccb9 --- /dev/null +++ b/docs/networking/networking.md @@ -0,0 +1,14 @@ +--- +title: "Networking" +weight: 20 +--- + +This section contains instructions for configuring networking in K3s. 
+ +[Basic Network Options](basic-network-options.md) covers the basic networking configuration of the cluster such as flannel and single/dual stack configurations + +[Hybrid/Multicloud cluster](distributed-multicloud.md) provides guidance on the options available to span the k3s cluster over remote or hybrid nodes + +[Multus and IPAM plugins](multus-ipams.md) provides guidance to leverage Multus in K3s in order to have multiple interfaces per pod + +[Networking services: dns, ingress, etc](networking-services.md) explains how CoreDNS, Traefik, Network Policy controller and ServiceLB controller work within k3s diff --git a/i18n/kr/docusaurus-plugin-content-docs/current/datastore/cluster-loadbalancer.md b/i18n/kr/docusaurus-plugin-content-docs/current/datastore/cluster-loadbalancer.md index e69de29bb..37afc9ddb 100644 --- a/i18n/kr/docusaurus-plugin-content-docs/current/datastore/cluster-loadbalancer.md +++ b/i18n/kr/docusaurus-plugin-content-docs/current/datastore/cluster-loadbalancer.md @@ -0,0 +1,194 @@ +--- +title: Cluster Load Balancer +weight: 30 +--- + + +This section describes how to install an external load balancer in front of a High Availability (HA) K3s cluster's server nodes. Two examples are provided: Nginx and HAProxy. + +:::tip +External load-balancers should not be confused with the embedded ServiceLB, which is an embedded controller that allows for use of Kubernetes LoadBalancer Services without deploying a third-party load-balancer controller. For more details, see [Service Load Balancer](../networking/networking-services.md#service-load-balancer). + +External load-balancers can be used to provide a fixed registration address for registering nodes, or for external access to the Kubernetes API Server. For exposing LoadBalancer Services, external load-balancers can be used alongside or instead of ServiceLB, but in most cases, replacement load-balancer controllers such as MetalLB or Kube-VIP are a better choice. +::: + +## Prerequisites + +All nodes in this example are running Ubuntu 20.04. + +For both examples, assume that a [HA K3s cluster with embedded etcd](../datastore/ha-embedded.md) has been installed on 3 nodes. + +Each k3s server is configured with: +```yaml +# /etc/rancher/k3s/config.yaml +token: lb-cluster-gd +tls-san: 10.10.10.100 +``` + +The nodes have hostnames and IPs of: +* server-1: `10.10.10.50` +* server-2: `10.10.10.51` +* server-3: `10.10.10.52` + + +Two additional nodes for load balancing are configured with hostnames and IPs of: +* lb-1: `10.10.10.98` +* lb-2: `10.10.10.99` + +Three additional nodes exist with hostnames and IPs of: +* agent-1: `10.10.10.101` +* agent-2: `10.10.10.102` +* agent-3: `10.10.10.103` + +## Setup Load Balancer + + + +[HAProxy](http://www.haproxy.org/) is an open source option that provides a TCP load balancer. It also supports HA for the load balancer itself, ensuring redundancy at all levels. See [HAProxy Documentation](http://docs.haproxy.org/2.8/intro.html) for more info. + +Additionally, we will use KeepAlived to generate a virtual IP (VIP) that will be used to access the cluster. See [KeepAlived Documentation](https://www.keepalived.org/manpage.html) for more info. 
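+
+Once the steps below are complete, a quick sanity check (a sketch that assumes the `eth1` interface and the `10.10.10.100` VIP used throughout this example) is to confirm which load balancer node currently holds the VIP and that the Kubernetes API answers through it:
+
+```bash
+# On lb-1 and lb-2: the node currently acting as MASTER lists the VIP on eth1
+ip addr show eth1 | grep 10.10.10.100
+
+# From any host that can reach the VIP: any HTTPS response here means HAProxy
+# is forwarding port 6443 to the K3s servers (-k skips verification because the
+# cluster CA is not in the local trust store)
+curl -ks https://10.10.10.100:6443/version
+```
+
+Stopping HAProxy on the MASTER node should move the VIP to the other load balancer within a few seconds, since the `chk_haproxy` script configured below fails as soon as the HAProxy process is gone.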
+ + + +1) Install HAProxy and KeepAlived: + +```bash +sudo apt-get install haproxy keepalived +``` + +2) Add the following to `/etc/haproxy/haproxy.cfg` on lb-1 and lb-2: + +``` +frontend k3s-frontend + bind *:6443 + mode tcp + option tcplog + default_backend k3s-backend + +backend k3s-backend + mode tcp + option tcp-check + balance roundrobin + default-server inter 10s downinter 5s + server server-1 10.10.10.50:6443 check + server server-2 10.10.10.51:6443 check + server server-3 10.10.10.52:6443 check +``` +3) Add the following to `/etc/keepalived/keepalived.conf` on lb-1 and lb-2: + +``` +vrrp_script chk_haproxy { + script 'killall -0 haproxy' # faster than pidof + interval 2 +} + +vrrp_instance haproxy-vip { + interface eth1 + state # MASTER on lb-1, BACKUP on lb-2 + priority # 200 on lb-1, 100 on lb-2 + + virtual_router_id 51 + + virtual_ipaddress { + 10.10.10.100/24 + } + + track_script { + chk_haproxy + } +} +``` + +6) Restart HAProxy and KeepAlived on lb-1 and lb-2: + +```bash +systemctl restart haproxy +systemctl restart keepalived +``` + +5) On agent-1, agent-2, and agent-3, run the following command to install k3s and join the cluster: + +```bash +curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.100:6443 +``` + +You can now use `kubectl` from server node to interact with the cluster. +```bash +root@server-1 $ k3s kubectl get nodes -A +NAME STATUS ROLES AGE VERSION +agent-1 Ready 32s v1.27.3+k3s1 +agent-2 Ready 20s v1.27.3+k3s1 +agent-3 Ready 9s v1.27.3+k3s1 +server-1 Ready control-plane,etcd,master 4m22s v1.27.3+k3s1 +server-2 Ready control-plane,etcd,master 3m58s v1.27.3+k3s1 +server-3 Ready control-plane,etcd,master 3m12s v1.27.3+k3s1 +``` + + + + + +## Nginx Load Balancer + +:::danger +Nginx does not natively support a High Availability (HA) configuration. If setting up an HA cluster, having a single load balancer in front of K3s will reintroduce a single point of failure. +::: + +[Nginx Open Source](http://nginx.org/) provides a TCP load balancer. See [Using nginx as HTTP load balancer](https://nginx.org/en/docs/http/load_balancing.html) for more info. + +1) Create a `nginx.conf` file on lb-1 with the following contents: + +``` +events {} + +stream { + upstream k3s_servers { + server 10.10.10.50:6443; + server 10.10.10.51:6443; + server 10.10.10.52:6443; + } + + server { + listen 6443; + proxy_pass k3s_servers; + } +} +``` + +2) Run the Nginx load balancer on lb-1: + +Using docker: + +```bash +docker run -d --restart unless-stopped \ + -v ${PWD}/nginx.conf:/etc/nginx/nginx.conf \ + -p 6443:6443 \ + nginx:stable +``` + +Or [install nginx](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/) and then run: + +```bash +cp nginx.conf /etc/nginx/nginx.conf +systemctl start nginx +``` + +3) On agent-1, agent-2, and agent-3, run the following command to install k3s and join the cluster: + +```bash +curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.99:6443 +``` + +You can now use `kubectl` from server node to interact with the cluster. 
+```bash +root@server1 $ k3s kubectl get nodes -A +NAME STATUS ROLES AGE VERSION +agent-1 Ready 30s v1.27.3+k3s1 +agent-2 Ready 22s v1.27.3+k3s1 +agent-3 Ready 13s v1.27.3+k3s1 +server-1 Ready control-plane,etcd,master 4m49s v1.27.3+k3s1 +server-2 Ready control-plane,etcd,master 3m58s v1.27.3+k3s1 +server-3 Ready control-plane,etcd,master 3m16s v1.27.3+k3s1 +``` + + diff --git a/i18n/kr/docusaurus-plugin-content-docs/current/installation/installation.md b/i18n/kr/docusaurus-plugin-content-docs/current/installation/installation.md index 279b5266e..85aa3e43b 100644 --- a/i18n/kr/docusaurus-plugin-content-docs/current/installation/installation.md +++ b/i18n/kr/docusaurus-plugin-content-docs/current/installation/installation.md @@ -7,8 +7,6 @@ This section contains instructions for installing K3s in various environments. P [Configuration Options](configuration.md) provides guidance on the options available to you when installing K3s. -[Network Options](network-options.md) provides guidance on the networking options available in k3s. - [Private Registry Configuration](private-registry.md) covers use of `registries.yaml` to configure container image registry mirrors. [Embedded Mirror](registry-mirror.md) shows how to enable the embedded distributed image registry mirror. diff --git a/i18n/kr/docusaurus-plugin-content-docs/current/networking.md b/i18n/kr/docusaurus-plugin-content-docs/current/networking.md index e77a98b18..ff6384b59 100644 --- a/i18n/kr/docusaurus-plugin-content-docs/current/networking.md +++ b/i18n/kr/docusaurus-plugin-content-docs/current/networking.md @@ -5,7 +5,7 @@ weight: 35 이 페이지는 CoreDNS, Traefik 인그레스 컨트롤러, Klipper 서비스 로드밸런서가 K3s 내에서 작동하는 방식을 설명합니다. -Flannel 구성 옵션 및 백엔드 선택에 대한 자세한 내용이나 자체 CNI 설정 방법은 [설치 네트워크 옵션](./installation/network-options.md) 페이지를 참조하세요. +Flannel 구성 옵션 및 백엔드 선택에 대한 자세한 내용이나 자체 CNI 설정 방법은 [설치 네트워크 옵션](./networking/basic-network-options.md) 페이지를 참조하세요. K3s를 위해 어떤 포트를 열어야 하는지에 대한 정보는 [네트워킹 요구 사항](./installation/requirements.md#networking)을 참조하세요. diff --git a/i18n/kr/docusaurus-plugin-content-docs/current/installation/network-options.md b/i18n/kr/docusaurus-plugin-content-docs/current/networking/basic-network-options.md similarity index 72% rename from i18n/kr/docusaurus-plugin-content-docs/current/installation/network-options.md rename to i18n/kr/docusaurus-plugin-content-docs/current/networking/basic-network-options.md index d4689529a..d7eff9b7d 100644 --- a/i18n/kr/docusaurus-plugin-content-docs/current/installation/network-options.md +++ b/i18n/kr/docusaurus-plugin-content-docs/current/networking/basic-network-options.md @@ -1,11 +1,9 @@ --- -title: "Network Options" +title: "Basic Network Options" weight: 25 --- -This page describes K3s network configuration options, including configuration or replacement of Flannel, and configuring IPv6. - -> **Note:** Please reference the [Networking](../networking.md) page for information about CoreDNS, Traefik, and the Service LB. +This page describes K3s network configuration options, including configuration or replacement of Flannel, and configuring IPv6 or dualStack. ## Flannel Options @@ -13,7 +11,7 @@ This page describes K3s network configuration options, including configuration o * Flannel options can only be set on server nodes, and must be identical on all servers in the cluster. * The default backend for Flannel is `vxlan`. To enable encryption, use the `wireguard-native` backend. 
-* Using `vxlan` on Rasperry Pi with recent versions of Ubuntu requires [additional preparation](./requirements.md?os=pi#operating-systems). +* Using `vxlan` on Rasperry Pi with recent versions of Ubuntu requires [additional preparation](../installation/requirements.md?os=pi#operating-systems). * Using `wireguard-native` as the Flannel backend may require additional modules on some Linux distributions. Please see the [WireGuard Install Guide](https://www.wireguard.com/install/) for details. The WireGuard install steps will ensure the appropriate kernel modules are installed for your operating system. You must ensure that WireGuard kernel modules are available on every node, both servers and agents, before attempting to use the WireGuard Flannel backend. @@ -183,85 +181,7 @@ Single-stack IPv6 clusters (clusters without IPv4) are supported on K3s using th ```bash --cluster-cidr=2001:cafe:42::/56 --service-cidr=2001:cafe:43::/112 ``` +## Nodes Without a Hostname -## Distributed hybrid or multicloud cluster - -A K3s cluster can still be deployed on nodes which do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded k3s multicloud solution and the integration with the `tailscale` VPN provider. - -:::warning -The latency between nodes will increase as external connectivity requires more hops. This will reduce the network performance and could also impact the health of the cluster if latency is too high. -::: - -:::warning -Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be reachable to each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location. -::: - -### Embedded k3s multicloud solution - -K3s uses wireguard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a wireguard tunnel. - -To enable this type of deployment, you must add the following parameters on servers: -```bash ---node-external-ip= --flannel-backend=wireguard-native --flannel-external-ip -``` -and on agents: -```bash ---node-external-ip= -``` - -where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. Note that the `K3S_URL` config parameter in the agent should use the `SERVER_EXTERNAL_IP` to be able to connect to it. Remember to check the [Networking Requirements](../installation/requirements.md#networking) and allow access to the listed ports on both internal and external addresses. - -Both `SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP` must have connectivity between them and are normally public IPs. - -:::info Dynamic IPs -If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` then run: - -```bash -systemctl daemon-reload -systemctl restart k3s -``` -::: - -### Integration with the Tailscale VPN provider (experimental) +Some cloud providers, such as Linode, will create machines with "localhost" as the hostname and others may not have a hostname set at all. This can cause problems with domain name resolution. 
You can run K3s with the `--node-name` flag or `K3S_NODE_NAME` environment variable and this will pass the node name to resolve this issue. -Available in v1.27.3, v1.26.6, v1.25.11 and newer. - -K3s can integrate with [Tailscale](https://tailscale.com/) so that nodes use the Tailscale VPN service to build a mesh between nodes. - -There are four steps to be done with Tailscale before deploying K3s: - -1. Log in to your Tailscale account - -2. In `Settings > Keys`, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster - -3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Append the CIDR (or CIDRs for dual-stack) in Access controls with the stanza: -```yaml -"autoApprovers": { - "routes": { - "10.42.0.0/16": ["your_account@xyz.com"], - "2001:cafe:42::/56": ["your_account@xyz.com"], - }, - }, -``` - -4. Install Tailscale in your nodes: -```bash -curl -fsSL https://tailscale.com/install.sh | sh -``` - -To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes: -```bash ---vpn-auth="name=tailscale,joinKey=$AUTH-KEY -``` -or provide that information in a file and use the parameter: -```bash ---vpn-auth-file=$PATH_TO_FILE -``` - -Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the vpn-auth parameters - -:::warning - -If you plan on running several K3s clusters using the same tailscale network, please create appropriate [ACLs](https://tailscale.com/kb/1018/acls/) to avoid IP conflicts or use different podCIDR subnets for each cluster. - -::: diff --git a/i18n/kr/docusaurus-plugin-content-docs/current/networking/distributed-multicloud.md b/i18n/kr/docusaurus-plugin-content-docs/current/networking/distributed-multicloud.md new file mode 100644 index 000000000..347979076 --- /dev/null +++ b/i18n/kr/docusaurus-plugin-content-docs/current/networking/distributed-multicloud.md @@ -0,0 +1,84 @@ +--- +title: "Distributed hybrid or multicloud cluster" +weight: 25 +--- + +A K3s cluster can still be deployed on nodes which do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded k3s multicloud solution and the integration with the `tailscale` VPN provider. + +:::warning +The latency between nodes will increase as external connectivity requires more hops. This will reduce the network performance and could also impact the health of the cluster if latency is too high. +::: + +:::warning +Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be reachable to each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location. +::: + +### Embedded k3s multicloud solution + +K3s uses wireguard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a wireguard tunnel. + +To enable this type of deployment, you must add the following parameters on servers: +```bash +--node-external-ip= --flannel-backend=wireguard-native --flannel-external-ip +``` +and on agents: +```bash +--node-external-ip= +``` + +where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. 
Note that the `K3S_URL` config parameter in the agent should use the `SERVER_EXTERNAL_IP` to be able to connect to it. Remember to check the [Networking Requirements](../installation/requirements.md#networking) and allow access to the listed ports on both internal and external addresses. + +Both `SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP` must have connectivity between them and are normally public IPs. + +:::info Dynamic IPs +If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` then run: + +```bash +systemctl daemon-reload +systemctl restart k3s +``` +::: + +### Integration with the Tailscale VPN provider (experimental) + +Available in v1.27.3, v1.26.6, v1.25.11 and newer. + +K3s can integrate with [Tailscale](https://tailscale.com/) so that nodes use the Tailscale VPN service to build a mesh between nodes. + +There are four steps to be done with Tailscale before deploying K3s: + +1. Log in to your Tailscale account + +2. In `Settings > Keys`, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster + +3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Append the CIDR (or CIDRs for dual-stack) in Access controls with the stanza: +```yaml +"autoApprovers": { + "routes": { + "10.42.0.0/16": ["your_account@xyz.com"], + "2001:cafe:42::/56": ["your_account@xyz.com"], + }, + }, +``` + +4. Install Tailscale in your nodes: +```bash +curl -fsSL https://tailscale.com/install.sh | sh +``` + +To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes: +```bash +--vpn-auth="name=tailscale,joinKey=$AUTH-KEY +``` +or provide that information in a file and use the parameter: +```bash +--vpn-auth-file=$PATH_TO_FILE +``` + +Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the vpn-auth parameters + +:::warning + +If you plan on running several K3s clusters using the same tailscale network, please create appropriate [ACLs](https://tailscale.com/kb/1018/acls/) to avoid IP conflicts or use different podCIDR subnets for each cluster. + +::: diff --git a/i18n/kr/docusaurus-plugin-content-docs/current/networking/multus-ipams.md b/i18n/kr/docusaurus-plugin-content-docs/current/networking/multus-ipams.md new file mode 100644 index 000000000..86428bba2 --- /dev/null +++ b/i18n/kr/docusaurus-plugin-content-docs/current/networking/multus-ipams.md @@ -0,0 +1,75 @@ +--- +title: "Multus and IPAM plugins" +weight: 25 +--- + +[Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni) is a CNI plugin that enables attaching multiple network interfaces to pods. Multus does not replace CNI plugins, instead it acts as a CNI plugin multiplexer. Multus is useful in certain use cases, especially when pods are network intensive and require extra network interfaces that support dataplane acceleration techniques such as SR-IOV. + +Multus can not be deployed standalone. It always requires at least one conventional CNI plugin that fulfills the Kubernetes cluster network requirements. That CNI plugin becomes the default for Multus, and will be used to provide the primary interface for all pods. When deploying K3s with default options, that CNI plugin is Flannel. 
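+
+Before installing anything, you can confirm which CNI configuration Multus will pick up as the default by listing the K3s CNI config directory (the same path used in the values files below). This is only a sanity check; on a default K3s install the directory typically contains the Flannel conflist:
+
+```bash
+ls /var/lib/rancher/k3s/agent/etc/cni/net.d
+# e.g. 10-flannel.conflist
+```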
+ +To deploy Multus, we recommend using the following helm repo: +``` +helm repo add rke2-charts https://rke2-charts.rancher.io +helm repo update +``` + +Then, to set the necessary configuration for it to work, a correct config file must be created. The configuration will depend on the IPAM plugin to be used, i.e. how your pods using Multus extra interfaces will configure the IPs for those extra interfaces. There are three options: host-local, DHCP Daemon and whereabouts: + + + +The host-local IPAM plugin allocates ip addresses out of a set of address ranges. It stores the state locally on the host filesystem, hence ensuring uniqueness of IP addresses on a single host. Therefore, we don't recommend it for multi-node clusters. This IPAM plugin does not require any extra deployment. For more information: https://www.cni.dev/plugins/current/ipam/host-local/. + +To use the host-local plugin, please create a file called `multus-values.yaml` with the following content: +``` +config: + cni_conf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ + kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig +``` + + + +[Whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) is an IP Address Management (IPAM) CNI plugin that assigns IP addresses cluster-wide. + +To use the Whereabouts IPAM plugin, please create a file called multus-values.yaml with the following content: +``` +config: + cni_conf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ + kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig +rke2-whereabouts: + fullnameOverride: whereabouts + enabled: true + cniConf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ +``` + + + +The dhcp IPAM plugin can be deployed when there is already a DHCP server running on the network. This daemonset takes care of periodically renewing the DHCP lease. For more information please check the official docs of [DHCP IPAM plugin](https://www.cni.dev/plugins/current/ipam/dhcp/). + +To use this DHCP plugin, please create a file called multus-values.yaml with the following content: +``` +config: + cni_conf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ + kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig +manifests: + dhcpDaemonSet: true +``` + + + + +After creating the `multus-values.yaml` file, everything is ready to install Multus: +``` +helm install multus rke2-charts/rke2-multus -n kube-system --kubeconfig /etc/rancher/k3s/k3s.yaml --values multus-values.yaml +``` + +That will create a daemonset called multus which will deploy multus and all regular cni binaries in /var/lib/rancher/k3s/data/current/ (e.g. macvlan) and the correct Multus config in /var/lib/rancher/k3s/agent/etc/cni/net.d + +For more information about Multus, refer to the [multus-cni](https://github.com/k8snetworkplumbingwg/multus-cni/tree/master/docs) documentation. 
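+
+As a usage sketch (not part of the chart installation above), extra networks are declared with `NetworkAttachmentDefinition` objects and attached to pods through the `k8s.v1.cni.cncf.io/networks` annotation. The `macvlan` type, the `eth0` host interface, and the subnet below are illustrative assumptions; adjust them to your environment:
+
+```yaml
+apiVersion: k8s.cni.cncf.io/v1
+kind: NetworkAttachmentDefinition
+metadata:
+  name: macvlan-net                  # hypothetical network name
+spec:
+  config: '{
+    "cniVersion": "0.3.1",
+    "type": "macvlan",
+    "master": "eth0",
+    "mode": "bridge",
+    "ipam": {
+      "type": "host-local",
+      "subnet": "192.168.1.0/24"
+    }
+  }'
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: multus-test                  # hypothetical pod
+  annotations:
+    k8s.v1.cni.cncf.io/networks: macvlan-net
+spec:
+  containers:
+  - name: shell
+    image: busybox
+    command: ["sleep", "3600"]
+```
+
+The pod keeps its primary interface from the default CNI plugin (Flannel on a default K3s install) and receives an additional `net1` interface from the macvlan network.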
diff --git a/i18n/kr/docusaurus-plugin-content-docs/current/networking/networking-services.md b/i18n/kr/docusaurus-plugin-content-docs/current/networking/networking-services.md new file mode 100644 index 000000000..6032a949f --- /dev/null +++ b/i18n/kr/docusaurus-plugin-content-docs/current/networking/networking-services.md @@ -0,0 +1,107 @@ +--- +title: "Networking Services" +weight: 35 +--- + +This page explains how CoreDNS, Traefik Ingress controller, Network Policy controller, and ServiceLB load balancer controller work within K3s. + +Refer to the [Installation Network Options](./basic-network-options.md) page for details on Flannel configuration options and backend selection, or how to set up your own CNI. + +For information on which ports need to be opened for K3s, refer to the [Networking Requirements](../installation/requirements.md#networking). + +## CoreDNS + +CoreDNS is deployed automatically on server startup. To disable it, configure all servers in the cluster with the `--disable=coredns` option. + +If you don't install CoreDNS, you will need to install a cluster DNS provider yourself. + +## Traefik Ingress Controller + +[Traefik](https://traefik.io/) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It simplifies networking complexity while designing, deploying, and running applications. + +The Traefik ingress controller deploys a LoadBalancer Service that uses ports 80 and 443. By default, ServiceLB will expose these ports on all cluster members, meaning these ports will not be usable for other HostPort or NodePort pods. + +Traefik is deployed by default when starting the server. For more information see [Managing Packaged Components](../installation/packaged-components.md). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml`. + +The `traefik.yaml` file should not be edited manually, as K3s will replace the file with defaults at startup. Instead, you should customize Traefik by creating an additional `HelmChartConfig` manifest in `/var/lib/rancher/k3s/server/manifests`. For more details and an example see [Customizing Packaged Components with HelmChartConfig](../helm.md#customizing-packaged-components-with-helmchartconfig). For more information on the possible configuration values, refer to the official [Traefik Helm Configuration Parameters.](https://github.com/traefik/traefik-helm-chart/tree/master/traefik). + +To remove Traefik from your cluster, start all servers with the `--disable=traefik` flag. + +K3s versions 1.20 and earlier include Traefik v1. K3s versions 1.21 and later install Traefik v2, unless an existing installation of Traefik v1 is found, in which case Traefik is not upgraded to v2. For more information on the specific version of Traefik included with K3s, consult the Release Notes for your version. + +To migrate from an older Traefik v1 instance please refer to the [Traefik documentation](https://doc.traefik.io/traefik/migration/v1-to-v2/) and [migration tool](https://github.com/traefik/traefik-migration-tool). + +## Network Policy Controller + +K3s includes an embedded network policy controller. The underlying implementation is [kube-router's](https://github.com/cloudnativelabs/kube-router) netpol controller library (no other kube-router functionality is present) and can be found [here](https://github.com/k3s-io/k3s/tree/master/pkg/agent/netpol). + +To disable it, start each server with the `--disable-network-policy` flag. 
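+
+Policies themselves are defined with the standard Kubernetes `NetworkPolicy` API, regardless of which controller enforces them. As a minimal illustration (the policy name is arbitrary and the `default` namespace is just an example), the following denies all ingress traffic to every pod in the namespace:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress   # example name
+  namespace: default
+spec:
+  podSelector: {}              # selects all pods in the namespace
+  policyTypes:
+  - Ingress                    # no ingress rules are listed, so all ingress is denied
+```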
+ +:::note +Network policy iptables rules are not removed if the K3s configuration is changed to disable the network policy controller. To clean up the configured kube-router network policy rules after disabling the network policy controller, use the `k3s-killall.sh` script, or clean them using `iptables-save` and `iptables-restore`. These steps must be run manually on all nodes in the cluster. +``` +iptables-save | grep -v KUBE-ROUTER | iptables-restore +ip6tables-save | grep -v KUBE-ROUTER | ip6tables-restore +``` +::: + +## Service Load Balancer + +Any LoadBalancer controller can be deployed to your K3s cluster. By default, K3s provides a load balancer known as [ServiceLB](https://github.com/k3s-io/klipper-lb) (formerly Klipper LoadBalancer) that uses available host ports. + +Upstream Kubernetes allows Services of type LoadBalancer to be created, but doesn't include a default load balancer implementation, so these services will remain `pending` until one is installed. Many hosted services require a cloud provider such as Amazon EC2 or Microsoft Azure to offer an external load balancer implementation. By contrast, the K3s ServiceLB makes it possible to use LoadBalancer Services without a cloud provider or any additional configuration. + +### How ServiceLB Works + +The ServiceLB controller watches Kubernetes [Services](https://kubernetes.io/docs/concepts/services-networking/service/) with the `spec.type` field set to `LoadBalancer`. + +For each LoadBalancer Service, a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) is created in the `kube-system` namespace. This DaemonSet in turn creates Pods with a `svc-` prefix, on each node. These Pods use iptables to forward traffic from the Pod's NodePort, to the Service's ClusterIP address and port. + +If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's `status.loadBalancer.ingress` address list. Otherwise, the node's internal IP is used. + +If multiple LoadBalancer Services are created, a separate DaemonSet is created for each Service. + +It is possible to expose multiple Services on the same node, as long as they use different ports. + +If you try to create a LoadBalancer Service that listens on port 80, the ServiceLB will try to find a free host in the cluster for port 80. If no host with that port is available, the LB will remain Pending. + +### Usage + +Create a [Service of type LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) in K3s. + +### Controlling ServiceLB Node Selection + +Adding the `svccontroller.k3s.cattle.io/enablelb=true` label to one or more nodes switches the ServiceLB controller into allow-list mode, where only nodes with the label are eligible to host LoadBalancer pods. Nodes that remain unlabeled will be excluded from use by ServiceLB. + +:::note +By default, nodes are not labeled. As long as all nodes remain unlabeled, all nodes with ports available will be used by ServiceLB. +::: + +### Creating ServiceLB Node Pools +To select a particular subset of nodes to host pods for a LoadBalancer, add the `enablelb` label to the desired nodes, and set matching `lbpool` label values on the Nodes and Services. For example: + +1. Label Node A and Node B with `svccontroller.k3s.cattle.io/lbpool=pool1` and `svccontroller.k3s.cattle.io/enablelb=true` +2. Label Node C and Node D with `svccontroller.k3s.cattle.io/lbpool=pool2` and `svccontroller.k3s.cattle.io/enablelb=true` +3. 
Create one LoadBalancer Service on port 443 with label `svccontroller.k3s.cattle.io/lbpool=pool1`. The DaemonSet for this service only deploy Pods to Node A and Node B. +4. Create another LoadBalancer Service on port 443 with label `svccontroller.k3s.cattle.io/lbpool=pool2`. The DaemonSet will only deploy Pods to Node C and Node D. + +### Disabling ServiceLB + +To disable ServiceLB, configure all servers in the cluster with the `--disable=servicelb` flag. + +This is necessary if you wish to run a different LB, such as MetalLB. + +## Deploying an External Cloud Controller Manager + +In order to reduce binary size, K3s removes all "in-tree" (built-in) cloud providers. Instead, K3s provides an embedded Cloud Controller Manager (CCM) stub that does the following: +- Sets node InternalIP and ExternalIP address fields based on the `--node-ip` and `--node-external-ip` flags. +- Hosts the ServiceLB LoadBalancer controller. +- Clears the `node.cloudprovider.kubernetes.io/uninitialized` taint that is present when the cloud-provider is set to `external` + +Before deploying an external CCM, you must start all K3s servers with the `--disable-cloud-controller` flag to disable to embedded CCM. + +:::note +If you disable the built-in CCM and do not deploy and properly configure an external substitute, nodes will remain tainted and unschedulable. +::: + + diff --git a/i18n/kr/docusaurus-plugin-content-docs/current/networking/networking.md b/i18n/kr/docusaurus-plugin-content-docs/current/networking/networking.md new file mode 100644 index 000000000..3aa64ccb9 --- /dev/null +++ b/i18n/kr/docusaurus-plugin-content-docs/current/networking/networking.md @@ -0,0 +1,14 @@ +--- +title: "Networking" +weight: 20 +--- + +This section contains instructions for configuring networking in K3s. + +[Basic Network Options](basic-network-options.md) covers the basic networking configuration of the cluster such as flannel and single/dual stack configurations + +[Hybrid/Multicloud cluster](distributed-multicloud.md) provides guidance on the options available to span the k3s cluster over remote or hybrid nodes + +[Multus and IPAM plugins](multus-ipams.md) provides guidance to leverage Multus in K3s in order to have multiple interfaces per pod + +[Networking services: dns, ingress, etc](networking-services.md) explains how CoreDNS, Traefik, Network Policy controller and ServiceLB controller work within k3s diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/datastore/cluster-loadbalancer.md b/i18n/zh/docusaurus-plugin-content-docs/current/datastore/cluster-loadbalancer.md index a05c89fdf..37afc9ddb 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/datastore/cluster-loadbalancer.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/datastore/cluster-loadbalancer.md @@ -1,62 +1,62 @@ --- -title: 集群负载均衡器 +title: Cluster Load Balancer weight: 30 --- -本节介绍如何在高可用性 (HA) K3s 集群的 Server 节点前安装外部负载均衡器。此处提供了两个示例:Nginx 和 HAProxy。 +This section describes how to install an external load balancer in front of a High Availability (HA) K3s cluster's server nodes. Two examples are provided: Nginx and HAProxy. :::tip -不要混淆外部负载均衡器与嵌入式 ServiceLB,后者是一个嵌入式控制器,允许在不部署第三方负载均衡器控制器的情况下使用 Kubernetes LoadBalancer Service。有关更多详细信息,请参阅 [Service Load Balancer](../networking.md#service-load-balancer)。 +External load-balancers should not be confused with the embedded ServiceLB, which is an embedded controller that allows for use of Kubernetes LoadBalancer Services without deploying a third-party load-balancer controller. 
For more details, see [Service Load Balancer](../networking/networking-services.md#service-load-balancer). -外部负载均衡器可用于提供固定的注册地址来注册节点,或用于从外部访问 Kubernetes API Server。为了公开 LoadBalancer Service,外部负载均衡器可以与 ServiceLB 一起使用或代替 ServiceLB,但在大多数情况下,替代负载均衡器控制器(例如 MetalLB 或 Kube-VIP)是更好的选择。 +External load-balancers can be used to provide a fixed registration address for registering nodes, or for external access to the Kubernetes API Server. For exposing LoadBalancer Services, external load-balancers can be used alongside or instead of ServiceLB, but in most cases, replacement load-balancer controllers such as MetalLB or Kube-VIP are a better choice. ::: -## 先决条件 +## Prerequisites -本示例中的所有节点都运行 Ubuntu 20.04。 +All nodes in this example are running Ubuntu 20.04. -这两个示例假设已在 3 个节点上安装了[具有嵌入式 etcd 的 HA K3s 集群](../datastore/ha-embedded.md)。 +For both examples, assume that a [HA K3s cluster with embedded etcd](../datastore/ha-embedded.md) has been installed on 3 nodes. -每个 K3s Server 配置有: +Each k3s server is configured with: ```yaml # /etc/rancher/k3s/config.yaml token: lb-cluster-gd tls-san: 10.10.10.100 ``` -节点的主机名和 IP 为: +The nodes have hostnames and IPs of: * server-1: `10.10.10.50` * server-2: `10.10.10.51` * server-3: `10.10.10.52` -用于负载均衡的两个节点配置了以下主机名和 IP: +Two additional nodes for load balancing are configured with hostnames and IPs of: * lb-1: `10.10.10.98` * lb-2: `10.10.10.99` -存在三个附加节点,其主机名和 IP 为: +Three additional nodes exist with hostnames and IPs of: * agent-1: `10.10.10.101` * agent-2: `10.10.10.102` * agent-3: `10.10.10.103` -## 设置负载均衡器 +## Setup Load Balancer -[HAProxy](http://www.haproxy.org/) 是一个提供 TCP 负载均衡器的开源选项。它还支持负载均衡器本身的 HA,确保各个级别的冗余。有关详细信息,请参阅 [HAProxy 文档](http://docs.haproxy.org/2.8/intro.html)。 +[HAProxy](http://www.haproxy.org/) is an open source option that provides a TCP load balancer. It also supports HA for the load balancer itself, ensuring redundancy at all levels. See [HAProxy Documentation](http://docs.haproxy.org/2.8/intro.html) for more info. -此外,我们将使用 KeepAlived 来生成用于访问集群的虚拟 IP (VIP)。有关详细信息,请参阅 [KeepAlived 文档](https://www.keepalived.org/manpage.html)。 +Additionally, we will use KeepAlived to generate a virtual IP (VIP) that will be used to access the cluster. See [KeepAlived Documentation](https://www.keepalived.org/manpage.html) for more info. -1) 安装 HAProxy 和 KeepAlived: +1) Install HAProxy and KeepAlived: ```bash sudo apt-get install haproxy keepalived ``` -2) 将以下内容添加到 lb-1 和 lb-2 上的 `/etc/haproxy/haproxy.cfg` 中: +2) Add the following to `/etc/haproxy/haproxy.cfg` on lb-1 and lb-2: ``` frontend k3s-frontend @@ -74,7 +74,7 @@ backend k3s-backend server server-2 10.10.10.51:6443 check server server-3 10.10.10.52:6443 check ``` -3) 将以下内容添加到 lb-1 和 lb-2 上的 `/etc/keepalived/keepalived.conf` 中: +3) Add the following to `/etc/keepalived/keepalived.conf` on lb-1 and lb-2: ``` vrrp_script chk_haproxy { @@ -99,20 +99,20 @@ vrrp_instance haproxy-vip { } ``` -6) 在 lb-1 和 lb-2 上重启 HAProxy 和 KeepAlived: +6) Restart HAProxy and KeepAlived on lb-1 and lb-2: ```bash systemctl restart haproxy systemctl restart keepalived ``` -5) 在 agent-1、agent-2、agent-3 上执行以下命令来安装 K3s 并加入集群: +5) On agent-1, agent-2, and agent-3, run the following command to install k3s and join the cluster: ```bash curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.100:6443 ``` -你现在可以从 Server 节点使用 `kubectl` 与集群交互。 +You can now use `kubectl` from server node to interact with the cluster. 
```bash root@server-1 $ k3s kubectl get nodes -A NAME STATUS ROLES AGE VERSION @@ -128,15 +128,15 @@ server-3 Ready control-plane,etcd,master 3m12s v1.27.3+k3s1 -## Nginx 负载均衡器 +## Nginx Load Balancer -:::warning -Nginx 本身不支持高可用性 (HA) 配置。如果设置 HA 集群,在 K3 前面使用单个负载均衡器将重新引入单一故障点。 +:::danger +Nginx does not natively support a High Availability (HA) configuration. If setting up an HA cluster, having a single load balancer in front of K3s will reintroduce a single point of failure. ::: -[Nginx 开源](http://nginx.org/)提供 TCP 负载均衡器。有关详细信息,请参阅[使用 Nginx 作为 HTTP 负载均衡器](https://nginx.org/en/docs/http/load_balancing.html)。 +[Nginx Open Source](http://nginx.org/) provides a TCP load balancer. See [Using nginx as HTTP load balancer](https://nginx.org/en/docs/http/load_balancing.html) for more info. -1) 在 lb-1 上创建一个包含以下内容的 `nginx.conf` 文件: +1) Create a `nginx.conf` file on lb-1 with the following contents: ``` events {} @@ -155,9 +155,9 @@ stream { } ``` -2) 在 lb-1 上运行 Nginx 负载均衡器: +2) Run the Nginx load balancer on lb-1: -使用 Docker: +Using docker: ```bash docker run -d --restart unless-stopped \ @@ -166,20 +166,20 @@ docker run -d --restart unless-stopped \ nginx:stable ``` -或者[安装 Nginx](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/) 然后运行: +Or [install nginx](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/) and then run: ```bash cp nginx.conf /etc/nginx/nginx.conf systemctl start nginx ``` -3) 在 agent-1、agent-2、agent-3 上执行以下命令来安装 K3s 并加入集群: +3) On agent-1, agent-2, and agent-3, run the following command to install k3s and join the cluster: ```bash curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.99:6443 ``` -你现在可以从 Server 节点使用 `kubectl` 与集群交互。 +You can now use `kubectl` from server node to interact with the cluster. ```bash root@server1 $ k3s kubectl get nodes -A NAME STATUS ROLES AGE VERSION @@ -191,4 +191,4 @@ server-2 Ready control-plane,etcd,master 3m58s v1.27.3+k3s1 server-3 Ready control-plane,etcd,master 3m16s v1.27.3+k3s1 ``` - \ No newline at end of file + diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/installation/installation.md b/i18n/zh/docusaurus-plugin-content-docs/current/installation/installation.md index 279b5266e..85aa3e43b 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/installation/installation.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/installation/installation.md @@ -7,8 +7,6 @@ This section contains instructions for installing K3s in various environments. P [Configuration Options](configuration.md) provides guidance on the options available to you when installing K3s. -[Network Options](network-options.md) provides guidance on the networking options available in k3s. - [Private Registry Configuration](private-registry.md) covers use of `registries.yaml` to configure container image registry mirrors. [Embedded Mirror](registry-mirror.md) shows how to enable the embedded distributed image registry mirror. 
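Before pointing agents at the load balancer in either of the examples above, it can be useful to confirm that the fixed registration address actually reaches the K3s API servers. The following is a minimal sketch only, assuming the HAProxy/KeepAlived VIP `10.10.10.100` from the example above; substitute whichever address your agents will use (for example `10.10.10.99` in the Nginx variant).

```bash
# Confirm the load balancer answers on the API server port.
nc -zv 10.10.10.100 6443

# Confirm the certificate served through the VIP includes the tls-san entry
# (10.10.10.100) from /etc/rancher/k3s/config.yaml, so agents and kubectl can
# connect through the load balancer without certificate errors.
openssl s_client -connect 10.10.10.100:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'
```

If the port check fails, verify that HAProxy (or Nginx) is running on the load-balancer nodes and that the VIP is currently held by one of them.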
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/networking.md b/i18n/zh/docusaurus-plugin-content-docs/current/networking.md index 290b35d65..005da42ba 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/networking.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/networking.md @@ -5,7 +5,7 @@ weight: 35 本文介绍了 CoreDNS、Traefik Ingress controller 和 Klipper service load balancer 是如何在 K3s 中工作的。 -有关 Flannel 配置选项和后端选择,以及如何设置自己的 CNI,请参阅[安装网络选项](./installation/network-options.md)页面。 +有关 Flannel 配置选项和后端选择,以及如何设置自己的 CNI,请参阅[安装网络选项](./installation/network-options/network-options.md)页面。 有关 K3s 需要开放哪些端口,请参考[网络要求](./installation/requirements.md#网络)。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/networking/basic-network-options.md b/i18n/zh/docusaurus-plugin-content-docs/current/networking/basic-network-options.md new file mode 100644 index 000000000..d7eff9b7d --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/networking/basic-network-options.md @@ -0,0 +1,187 @@ +--- +title: "Basic Network Options" +weight: 25 +--- + +This page describes K3s network configuration options, including configuration or replacement of Flannel, and configuring IPv6 or dualStack. + +## Flannel Options + +[Flannel](https://github.com/flannel-io/flannel/blob/master/README.md) is a lightweight provider of layer 3 network fabric that implements the Kubernetes Container Network Interface (CNI). It is what is commonly referred to as a CNI Plugin. + +* Flannel options can only be set on server nodes, and must be identical on all servers in the cluster. +* The default backend for Flannel is `vxlan`. To enable encryption, use the `wireguard-native` backend. +* Using `vxlan` on Rasperry Pi with recent versions of Ubuntu requires [additional preparation](../installation/requirements.md?os=pi#operating-systems). +* Using `wireguard-native` as the Flannel backend may require additional modules on some Linux distributions. Please see the [WireGuard Install Guide](https://www.wireguard.com/install/) for details. + The WireGuard install steps will ensure the appropriate kernel modules are installed for your operating system. + You must ensure that WireGuard kernel modules are available on every node, both servers and agents, before attempting to use the WireGuard Flannel backend. + + +| CLI Flag and Value | Description | +|--------------------|-------------| +| `--flannel-ipv6-masq` | Apply masquerading rules to IPv6 traffic (default for IPv4). Only applies on dual-stack or IPv6-only clusters. Compatible with any Flannel backend other than `none`. | +| `--flannel-external-ip` | Use node external IP addresses as the destination for Flannel traffic, instead of internal IPs. Only applies when --node-external-ip is set on a node. | +| `--flannel-backend=vxlan` | Use VXLAN to encapsulate the packets. May require additional kernel modules on Raspberry Pi. | +| `--flannel-backend=host-gw` | Use IP routes to pod subnets via node IPs. Requires direct layer 2 connectivity between all nodes in the cluster. | +| `--flannel-backend=wireguard-native` | Use WireGuard to encapsulate and encrypt network traffic. May require additional kernel modules. | +| `--flannel-backend=ipsec` | Use strongSwan IPSec via the `swanctl` binary to encrypt network traffic. (Deprecated; will be removed in v1.27.0) | +| `--flannel-backend=none` | Disable Flannel entirely. 
| + +:::info Version Gate + +K3s no longer includes strongSwan `swanctl` and `charon` binaries starting with the 2022-12 releases (v1.26.0+k3s1, v1.25.5+k3s1, v1.24.9+k3s1, v1.23.15+k3s1). Please install the correct packages on your node before upgrading to or installing these releases if you want to use the `ipsec` backend. + +::: + +### Migrating from `wireguard` or `ipsec` to `wireguard-native` + +The legacy `wireguard` backend requires installation of the `wg` tool on the host. This backend is not available in K3s v1.26 and higher, in favor of `wireguard-native` backend, which directly interfaces with the kernel. + +The legacy `ipsec` backend requires installation of the `swanctl` and `charon` binaries on the host. This backend is not available in K3s v1.27 and higher, in favor of the `wireguard-native` backend. + +We recommend that users migrate to the new backend as soon as possible. The migration requires a short period of downtime while nodes come up with the new configuration. You should follow these two steps: + +1. Update the K3s config on all server nodes. If using config files, the `/etc/rancher/k3s/config.yaml` should include `flannel-backend: wireguard-native` instead of `flannel-backend: wireguard` or `flannel-backend: ipsec`. If you are configuring K3s via CLI flags in the systemd unit, the equivalent flags should be changed. +2. Reboot all nodes, starting with the servers. + +## Custom CNI + +Start K3s with `--flannel-backend=none` and install your CNI of choice. Most CNI plugins come with their own network policy engine, so it is recommended to set `--disable-network-policy` as well to avoid conflicts. Some important information to take into consideration: + + + + +Visit the [Canal Docs](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel#installing-calico-for-policy-and-flannel-aka-canal-for-networking) website. Follow the steps to install Canal. Modify the Canal YAML so that IP forwarding is allowed in the `container_settings` section, for example: + +```yaml +"container_settings": { + "allow_ip_forwarding": true +} +``` + +Apply the Canal YAML. + +Ensure the settings were applied by running the following command on the host: + +```bash +cat /etc/cni/net.d/10-canal.conflist +``` + +You should see that IP forwarding is set to true. + + + + +Follow the [Calico CNI Plugins Guide](https://docs.tigera.io/calico/latest/reference/configure-cni-plugins). Modify the Calico YAML so that IP forwarding is allowed in the `container_settings` section, for example: + +```yaml +"container_settings": { + "allow_ip_forwarding": true +} +``` + +Apply the Calico YAML. + +Ensure the settings were applied by running the following command on the host: + +```bash +cat /etc/cni/net.d/10-calico.conflist +``` + +You should see that IP forwarding is set to true. + + + + + +Before running `k3s-killall.sh` or `k3s-uninstall.sh`, you must manually remove `cilium_host`, `cilium_net` and `cilium_vxlan` interfaces. 
If you fail to do this, you may lose network connectivity to the host when K3s is stopped + +```bash +ip link delete cilium_host +ip link delete cilium_net +ip link delete cilium_vxlan +``` + +Additionally, iptables rules for cilium should be removed: + +```bash +iptables-save | grep -iv cilium | iptables-restore +ip6tables-save | grep -iv cilium | ip6tables-restore +``` + + + + +## Control-Plane Egress Selector configuration + +K3s agents and servers maintain websocket tunnels between nodes that are used to encapsulate bidirectional communication between the control-plane (apiserver) and agent (kubelet and containerd) components. +This allows agents to operate without exposing the kubelet and container runtime streaming ports to incoming connections, and for the control-plane to connect to cluster services when operating with the agent disabled. +This functionality is equivalent to the [Konnectivity](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/) service commonly used on other Kubernetes distributions, and is managed via the apiserver's egress selector configuration. + +The default mode is `agent`. `pod` or `cluster` modes are recommended when running [agentless servers](../advanced.md#running-agentless-servers-experimental), in order to provide the apiserver with access to cluster service endpoints in the absence of flannel and kube-proxy. + +The egress selector mode may be configured on servers via the `--egress-selector-mode` flag, and offers four modes: +* `disabled`: The apiserver does not use agent tunnels to communicate with kubelets or cluster endpoints. + This mode requires that servers run the kubelet, CNI, and kube-proxy, and have direct connectivity to agents, or the apiserver will not be able to access service endpoints or perform `kubectl exec` and `kubectl logs`. +* `agent` (default): The apiserver uses agent tunnels to communicate with kubelets. + This mode requires that the servers also run the kubelet, CNI, and kube-proxy, or the apiserver will not be able to access service endpoints. +* `pod`: The apiserver uses agent tunnels to communicate with kubelets and service endpoints, routing endpoint connections to the correct agent by watching Nodes and Endpoints. + **NOTE**: This mode will not work when using a CNI that uses its own IPAM and does not respect the node's PodCIDR allocation. `cluster` or `agent` mode should be used with these CNIs instead. +* `cluster`: The apiserver uses agent tunnels to communicate with kubelets and service endpoints, routing endpoint connections to the correct agent by watching Pods and Endpoints. This mode has the highest portability across different cluster configurations, at the cost of increased overhead. + +## Dual-stack (IPv4 + IPv6) Networking + +:::info Version Gate + +Experimental support is available as of [v1.21.0+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.21.0%2Bk3s1). +Stable support is available as of [v1.23.7+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.23.7%2Bk3s1). + +::: + +:::warning Known Issue + +Before 1.27, Kubernetes [Issue #111695](https://github.com/kubernetes/kubernetes/issues/111695) causes the Kubelet to ignore the node IPv6 addresses if you have a dual-stack environment and you are not using the primary network interface for cluster traffic. 
To avoid this bug, use 1.27 or newer or add the following flag to both K3s servers and agents: + +``` +--kubelet-arg="node-ip=0.0.0.0" # To proritize IPv4 traffic +#OR +--kubelet-arg="node-ip=::" # To proritize IPv6 traffic +``` + +::: + +Dual-stack networking must be configured when the cluster is first created. It cannot be enabled on an existing cluster once it has been started as IPv4-only. + +To enable dual-stack in K3s, you must provide valid dual-stack `cluster-cidr` and `service-cidr` on all server nodes. This is an example of a valid configuration: + +``` +--cluster-cidr=10.42.0.0/16,2001:cafe:42::/56 --service-cidr=10.43.0.0/16,2001:cafe:43::/112 +``` + +Note that you may configure any valid `cluster-cidr` and `service-cidr` values, but the above masks are recommended. If you change the `cluster-cidr` mask, you should also change the `node-cidr-mask-size-ipv4` and `node-cidr-mask-size-ipv6` values to match the planned pods per node and total node count. The largest supported `service-cidr` mask is /12 for IPv4, and /112 for IPv6. Remember to allow ipv6 traffic if you are deploying in a public cloud. + +If you are using a custom CNI plugin, i.e. a CNI plugin other than Flannel, the additional configuration may be required. Please consult your plugin's dual-stack documentation and verify if network policies can be enabled. + +:::warning Known Issue +When defining cluster-cidr and service-cidr with IPv6 as the primary family, the node-ip of all cluster members should be explicitly set, placing node's desired IPv6 address as the first address. By default, the kubelet always uses IPv4 as the primary address family. +::: + +## Single-stack IPv6 Networking + +:::info Version Gate +Available as of [v1.22.9+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.22.9%2Bk3s1) +::: + +:::warning Known Issue +If your IPv6 default route is set by a router advertisement (RA), you will need to set the sysctl `net.ipv6.conf.all.accept_ra=2`; otherwise, the node will drop the default route once it expires. Be aware that accepting RAs could increase the risk of [man-in-the-middle attacks](https://github.com/kubernetes/kubernetes/issues/91507). +::: + +Single-stack IPv6 clusters (clusters without IPv4) are supported on K3s using the `--cluster-cidr` and `--service-cidr` flags. This is an example of a valid configuration: + +```bash +--cluster-cidr=2001:cafe:42::/56 --service-cidr=2001:cafe:43::/112 +``` +## Nodes Without a Hostname + +Some cloud providers, such as Linode, will create machines with "localhost" as the hostname and others may not have a hostname set at all. This can cause problems with domain name resolution. You can run K3s with the `--node-name` flag or `K3S_NODE_NAME` environment variable and this will pass the node name to resolve this issue. + diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/networking/distributed-multicloud.md b/i18n/zh/docusaurus-plugin-content-docs/current/networking/distributed-multicloud.md new file mode 100644 index 000000000..347979076 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/networking/distributed-multicloud.md @@ -0,0 +1,84 @@ +--- +title: "Distributed hybrid or multicloud cluster" +weight: 25 +--- + +A K3s cluster can still be deployed on nodes which do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded k3s multicloud solution and the integration with the `tailscale` VPN provider. 
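Whichever option you choose, the required K3s ports must be reachable on the nodes' external addresses. As a rough sketch only, assuming the embedded multicloud option with the default `wireguard-native` settings described below and `ufw` as the host firewall (see the Networking Requirements page for the authoritative port list):

```bash
# Illustrative ufw rules only; adjust sources and ports to your environment.
sudo ufw allow 6443/tcp    # Kubernetes API / K3s supervisor (agents -> servers)
sudo ufw allow 51820/udp   # flannel wireguard-native tunnel, IPv4 (all nodes)
sudo ufw allow 10250/tcp   # kubelet metrics (all nodes)
```

If you use the Tailscale integration instead, cluster traffic flows over the tailnet, and Tailscale's own connectivity requirements apply.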
+ +:::warning +The latency between nodes will increase as external connectivity requires more hops. This will reduce the network performance and could also impact the health of the cluster if latency is too high. +::: + +:::warning +Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be reachable to each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location. +::: + +### Embedded k3s multicloud solution + +K3s uses wireguard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a wireguard tunnel. + +To enable this type of deployment, you must add the following parameters on servers: +```bash +--node-external-ip= --flannel-backend=wireguard-native --flannel-external-ip +``` +and on agents: +```bash +--node-external-ip= +``` + +where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. Note that the `K3S_URL` config parameter in the agent should use the `SERVER_EXTERNAL_IP` to be able to connect to it. Remember to check the [Networking Requirements](../installation/requirements.md#networking) and allow access to the listed ports on both internal and external addresses. + +Both `SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP` must have connectivity between them and are normally public IPs. + +:::info Dynamic IPs +If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` then run: + +```bash +systemctl daemon-reload +systemctl restart k3s +``` +::: + +### Integration with the Tailscale VPN provider (experimental) + +Available in v1.27.3, v1.26.6, v1.25.11 and newer. + +K3s can integrate with [Tailscale](https://tailscale.com/) so that nodes use the Tailscale VPN service to build a mesh between nodes. + +There are four steps to be done with Tailscale before deploying K3s: + +1. Log in to your Tailscale account + +2. In `Settings > Keys`, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster + +3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Append the CIDR (or CIDRs for dual-stack) in Access controls with the stanza: +```yaml +"autoApprovers": { + "routes": { + "10.42.0.0/16": ["your_account@xyz.com"], + "2001:cafe:42::/56": ["your_account@xyz.com"], + }, + }, +``` + +4. Install Tailscale in your nodes: +```bash +curl -fsSL https://tailscale.com/install.sh | sh +``` + +To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes: +```bash +--vpn-auth="name=tailscale,joinKey=$AUTH-KEY +``` +or provide that information in a file and use the parameter: +```bash +--vpn-auth-file=$PATH_TO_FILE +``` + +Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the vpn-auth parameters + +:::warning + +If you plan on running several K3s clusters using the same tailscale network, please create appropriate [ACLs](https://tailscale.com/kb/1018/acls/) to avoid IP conflicts or use different podCIDR subnets for each cluster. 
+ +::: diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/networking/multus-ipams.md b/i18n/zh/docusaurus-plugin-content-docs/current/networking/multus-ipams.md new file mode 100644 index 000000000..86428bba2 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/networking/multus-ipams.md @@ -0,0 +1,75 @@ +--- +title: "Multus and IPAM plugins" +weight: 25 +--- + +[Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni) is a CNI plugin that enables attaching multiple network interfaces to pods. Multus does not replace CNI plugins, instead it acts as a CNI plugin multiplexer. Multus is useful in certain use cases, especially when pods are network intensive and require extra network interfaces that support dataplane acceleration techniques such as SR-IOV. + +Multus can not be deployed standalone. It always requires at least one conventional CNI plugin that fulfills the Kubernetes cluster network requirements. That CNI plugin becomes the default for Multus, and will be used to provide the primary interface for all pods. When deploying K3s with default options, that CNI plugin is Flannel. + +To deploy Multus, we recommend using the following helm repo: +``` +helm repo add rke2-charts https://rke2-charts.rancher.io +helm repo update +``` + +Then, to set the necessary configuration for it to work, a correct config file must be created. The configuration will depend on the IPAM plugin to be used, i.e. how your pods using Multus extra interfaces will configure the IPs for those extra interfaces. There are three options: host-local, DHCP Daemon and whereabouts: + + + +The host-local IPAM plugin allocates ip addresses out of a set of address ranges. It stores the state locally on the host filesystem, hence ensuring uniqueness of IP addresses on a single host. Therefore, we don't recommend it for multi-node clusters. This IPAM plugin does not require any extra deployment. For more information: https://www.cni.dev/plugins/current/ipam/host-local/. + +To use the host-local plugin, please create a file called `multus-values.yaml` with the following content: +``` +config: + cni_conf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ + kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig +``` + + + +[Whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) is an IP Address Management (IPAM) CNI plugin that assigns IP addresses cluster-wide. + +To use the Whereabouts IPAM plugin, please create a file called multus-values.yaml with the following content: +``` +config: + cni_conf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ + kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig +rke2-whereabouts: + fullnameOverride: whereabouts + enabled: true + cniConf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ +``` + + + +The dhcp IPAM plugin can be deployed when there is already a DHCP server running on the network. This daemonset takes care of periodically renewing the DHCP lease. For more information please check the official docs of [DHCP IPAM plugin](https://www.cni.dev/plugins/current/ipam/dhcp/). 
+ +To use this DHCP plugin, please create a file called multus-values.yaml with the following content: +``` +config: + cni_conf: + confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d + binDir: /var/lib/rancher/k3s/data/current/bin/ + kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig +manifests: + dhcpDaemonSet: true +``` + + + + +After creating the `multus-values.yaml` file, everything is ready to install Multus: +``` +helm install multus rke2-charts/rke2-multus -n kube-system --kubeconfig /etc/rancher/k3s/k3s.yaml --values multus-values.yaml +``` + +That will create a daemonset called multus which will deploy multus and all regular cni binaries in /var/lib/rancher/k3s/data/current/ (e.g. macvlan) and the correct Multus config in /var/lib/rancher/k3s/agent/etc/cni/net.d + +For more information about Multus, refer to the [multus-cni](https://github.com/k8snetworkplumbingwg/multus-cni/tree/master/docs) documentation. diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/networking/networking-services.md b/i18n/zh/docusaurus-plugin-content-docs/current/networking/networking-services.md new file mode 100644 index 000000000..6032a949f --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/networking/networking-services.md @@ -0,0 +1,107 @@ +--- +title: "Networking Services" +weight: 35 +--- + +This page explains how CoreDNS, Traefik Ingress controller, Network Policy controller, and ServiceLB load balancer controller work within K3s. + +Refer to the [Installation Network Options](./basic-network-options.md) page for details on Flannel configuration options and backend selection, or how to set up your own CNI. + +For information on which ports need to be opened for K3s, refer to the [Networking Requirements](../installation/requirements.md#networking). + +## CoreDNS + +CoreDNS is deployed automatically on server startup. To disable it, configure all servers in the cluster with the `--disable=coredns` option. + +If you don't install CoreDNS, you will need to install a cluster DNS provider yourself. + +## Traefik Ingress Controller + +[Traefik](https://traefik.io/) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It simplifies networking complexity while designing, deploying, and running applications. + +The Traefik ingress controller deploys a LoadBalancer Service that uses ports 80 and 443. By default, ServiceLB will expose these ports on all cluster members, meaning these ports will not be usable for other HostPort or NodePort pods. + +Traefik is deployed by default when starting the server. For more information see [Managing Packaged Components](../installation/packaged-components.md). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml`. + +The `traefik.yaml` file should not be edited manually, as K3s will replace the file with defaults at startup. Instead, you should customize Traefik by creating an additional `HelmChartConfig` manifest in `/var/lib/rancher/k3s/server/manifests`. For more details and an example see [Customizing Packaged Components with HelmChartConfig](../helm.md#customizing-packaged-components-with-helmchartconfig). For more information on the possible configuration values, refer to the official [Traefik Helm Configuration Parameters.](https://github.com/traefik/traefik-helm-chart/tree/master/traefik). + +To remove Traefik from your cluster, start all servers with the `--disable=traefik` flag. + +K3s versions 1.20 and earlier include Traefik v1. 
K3s versions 1.21 and later install Traefik v2, unless an existing installation of Traefik v1 is found, in which case Traefik is not upgraded to v2. For more information on the specific version of Traefik included with K3s, consult the Release Notes for your version. + +To migrate from an older Traefik v1 instance please refer to the [Traefik documentation](https://doc.traefik.io/traefik/migration/v1-to-v2/) and [migration tool](https://github.com/traefik/traefik-migration-tool). + +## Network Policy Controller + +K3s includes an embedded network policy controller. The underlying implementation is [kube-router's](https://github.com/cloudnativelabs/kube-router) netpol controller library (no other kube-router functionality is present) and can be found [here](https://github.com/k3s-io/k3s/tree/master/pkg/agent/netpol). + +To disable it, start each server with the `--disable-network-policy` flag. + +:::note +Network policy iptables rules are not removed if the K3s configuration is changed to disable the network policy controller. To clean up the configured kube-router network policy rules after disabling the network policy controller, use the `k3s-killall.sh` script, or clean them using `iptables-save` and `iptables-restore`. These steps must be run manually on all nodes in the cluster. +``` +iptables-save | grep -v KUBE-ROUTER | iptables-restore +ip6tables-save | grep -v KUBE-ROUTER | ip6tables-restore +``` +::: + +## Service Load Balancer + +Any LoadBalancer controller can be deployed to your K3s cluster. By default, K3s provides a load balancer known as [ServiceLB](https://github.com/k3s-io/klipper-lb) (formerly Klipper LoadBalancer) that uses available host ports. + +Upstream Kubernetes allows Services of type LoadBalancer to be created, but doesn't include a default load balancer implementation, so these services will remain `pending` until one is installed. Many hosted services require a cloud provider such as Amazon EC2 or Microsoft Azure to offer an external load balancer implementation. By contrast, the K3s ServiceLB makes it possible to use LoadBalancer Services without a cloud provider or any additional configuration. + +### How ServiceLB Works + +The ServiceLB controller watches Kubernetes [Services](https://kubernetes.io/docs/concepts/services-networking/service/) with the `spec.type` field set to `LoadBalancer`. + +For each LoadBalancer Service, a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) is created in the `kube-system` namespace. This DaemonSet in turn creates Pods with a `svc-` prefix, on each node. These Pods use iptables to forward traffic from the Pod's NodePort, to the Service's ClusterIP address and port. + +If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's `status.loadBalancer.ingress` address list. Otherwise, the node's internal IP is used. + +If multiple LoadBalancer Services are created, a separate DaemonSet is created for each Service. + +It is possible to expose multiple Services on the same node, as long as they use different ports. + +If you try to create a LoadBalancer Service that listens on port 80, the ServiceLB will try to find a free host in the cluster for port 80. If no host with that port is available, the LB will remain Pending. + +### Usage + +Create a [Service of type LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) in K3s. 
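As a minimal sketch (the names and the choice of port 8080 are illustrative, not part of K3s), the following creates an nginx Deployment and exposes it through ServiceLB:

```bash
# Minimal example: an nginx Deployment exposed through a LoadBalancer Service.
# Port 8080 is used because ports 80/443 are already claimed by the packaged
# Traefik LoadBalancer Service on a default K3s install.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-nginx
spec:
  type: LoadBalancer
  selector:
    app: test-nginx
  ports:
  - port: 8080
    targetPort: 80
EOF

# The EXTERNAL-IP column is populated from status.loadBalancer.ingress once a
# ServiceLB pod for this Service is running on a node with port 8080 available.
kubectl get svc test-nginx
```

If no node has the requested port free, the Service remains Pending as noted above, so pick an unused port when testing.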
+ +### Controlling ServiceLB Node Selection + +Adding the `svccontroller.k3s.cattle.io/enablelb=true` label to one or more nodes switches the ServiceLB controller into allow-list mode, where only nodes with the label are eligible to host LoadBalancer pods. Nodes that remain unlabeled will be excluded from use by ServiceLB. + +:::note +By default, nodes are not labeled. As long as all nodes remain unlabeled, all nodes with ports available will be used by ServiceLB. +::: + +### Creating ServiceLB Node Pools +To select a particular subset of nodes to host pods for a LoadBalancer, add the `enablelb` label to the desired nodes, and set matching `lbpool` label values on the Nodes and Services. For example: + +1. Label Node A and Node B with `svccontroller.k3s.cattle.io/lbpool=pool1` and `svccontroller.k3s.cattle.io/enablelb=true` +2. Label Node C and Node D with `svccontroller.k3s.cattle.io/lbpool=pool2` and `svccontroller.k3s.cattle.io/enablelb=true` +3. Create one LoadBalancer Service on port 443 with label `svccontroller.k3s.cattle.io/lbpool=pool1`. The DaemonSet for this service only deploy Pods to Node A and Node B. +4. Create another LoadBalancer Service on port 443 with label `svccontroller.k3s.cattle.io/lbpool=pool2`. The DaemonSet will only deploy Pods to Node C and Node D. + +### Disabling ServiceLB + +To disable ServiceLB, configure all servers in the cluster with the `--disable=servicelb` flag. + +This is necessary if you wish to run a different LB, such as MetalLB. + +## Deploying an External Cloud Controller Manager + +In order to reduce binary size, K3s removes all "in-tree" (built-in) cloud providers. Instead, K3s provides an embedded Cloud Controller Manager (CCM) stub that does the following: +- Sets node InternalIP and ExternalIP address fields based on the `--node-ip` and `--node-external-ip` flags. +- Hosts the ServiceLB LoadBalancer controller. +- Clears the `node.cloudprovider.kubernetes.io/uninitialized` taint that is present when the cloud-provider is set to `external` + +Before deploying an external CCM, you must start all K3s servers with the `--disable-cloud-controller` flag to disable to embedded CCM. + +:::note +If you disable the built-in CCM and do not deploy and properly configure an external substitute, nodes will remain tainted and unschedulable. +::: + + diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/networking/networking.md b/i18n/zh/docusaurus-plugin-content-docs/current/networking/networking.md new file mode 100644 index 000000000..3aa64ccb9 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/networking/networking.md @@ -0,0 +1,14 @@ +--- +title: "Networking" +weight: 20 +--- + +This section contains instructions for configuring networking in K3s. 
+ +[Basic Network Options](basic-network-options.md) covers the basic networking configuration of the cluster such as flannel and single/dual stack configurations + +[Hybrid/Multicloud cluster](distributed-multicloud.md) provides guidance on the options available to span the k3s cluster over remote or hybrid nodes + +[Multus and IPAM plugins](multus-ipams.md) provides guidance to leverage Multus in K3s in order to have multiple interfaces per pod + +[Networking services: dns, ingress, etc](networking-services.md) explains how CoreDNS, Traefik, Network Policy controller and ServiceLB controller work within k3s diff --git a/package.json b/package.json index ba0826765..1faa5b94e 100644 --- a/package.json +++ b/package.json @@ -17,7 +17,7 @@ "@docusaurus/core": "^3.2.0", "@docusaurus/plugin-client-redirects": "^3.2.0", "@docusaurus/preset-classic": "^3.2.0", - "@docusaurus/theme-common": "^3.0.1", + "@docusaurus/theme-common": "^3.2.0", "@docusaurus/theme-mermaid": "^3.2.0", "@easyops-cn/docusaurus-search-local": "^0.40.1", "@mdx-js/react": "3.0.1", @@ -32,7 +32,7 @@ "remark-validate-links-heading-id": "^0.0.3" }, "devDependencies": { - "@docusaurus/module-type-aliases": "^3.0.1" + "@docusaurus/module-type-aliases": "^3.2.0" }, "browserslist": { "production": [ diff --git a/sidebars.js b/sidebars.js index 12159e798..b936a4b24 100644 --- a/sidebars.js +++ b/sidebars.js @@ -9,7 +9,6 @@ module.exports = { items:[ 'installation/requirements', 'installation/configuration', - 'installation/network-options', 'installation/private-registry', 'installation/registry-mirror', 'installation/airgap', @@ -65,7 +64,17 @@ module.exports = { 'architecture', 'cluster-access', 'storage', - 'networking', + { + type: 'category', + label: 'Networking', + link: { type: 'doc', id: 'networking/networking'}, + items: [ + 'networking/basic-network-options', + 'networking/distributed-multicloud', + 'networking/multus-ipams', + 'networking/networking-services', + ], + }, 'helm', 'advanced', { diff --git a/yarn.lock b/yarn.lock index fdd0e1c60..68385aec1 100644 --- a/yarn.lock +++ b/yarn.lock @@ -1383,7 +1383,7 @@ vfile "^6.0.1" webpack "^5.88.1" -"@docusaurus/module-type-aliases@3.2.0", "@docusaurus/module-type-aliases@^3.0.1": +"@docusaurus/module-type-aliases@3.2.0", "@docusaurus/module-type-aliases@^3.2.0": version "3.2.0" resolved "https://registry.yarnpkg.com/@docusaurus/module-type-aliases/-/module-type-aliases-3.2.0.tgz#ef883d8418f37e551eca72adc409014e720786d4" integrity sha512-jRSp9YkvBwwNz6Xgy0RJPsnie+Ebb//gy7GdbkJ2pW2gvvlYKGib2+jSF0pfIzvyZLulfCynS1KQdvDKdSl8zQ== @@ -1587,7 +1587,7 @@ tslib "^2.6.0" utility-types "^3.10.0" -"@docusaurus/theme-common@3.2.0", "@docusaurus/theme-common@^3.0.1": +"@docusaurus/theme-common@3.2.0", "@docusaurus/theme-common@^3.2.0": version "3.2.0" resolved "https://registry.yarnpkg.com/@docusaurus/theme-common/-/theme-common-3.2.0.tgz#67f5f1a1e265e1f1a5b9fa7bfb4bf7b98dfcf981" integrity sha512-sFbw9XviNJJ+760kAcZCQMQ3jkNIznGqa6MQ70E5BnbP+ja36kGgPOfjcsvAcNey1H1Rkhh3p2Mhf4HVLdKVVw==