diff --git a/docs/advanced.md b/docs/advanced.md
index a00f0a07..cb947698 100644
--- a/docs/advanced.md
+++ b/docs/advanced.md
@@ -12,7 +12,7 @@ By default, certificates in RKE2 expire in 12 months.
 If the certificates are expired or have fewer than 90 days remaining before they expire, the certificates are rotated when RKE2 is restarted.

-As of v1.21.8+rke2r1, certificates can also be rotated manually. To do this, it is best to stop the rke2-server process, rotate the certificates, then start the process up again:
+Certificates can also be rotated manually. To do this, it is best to stop the rke2-server process, rotate the certificates, then start the process up again:
 ```sh
 systemctl stop rke2-server
 rke2 certificate rotate
@@ -90,8 +90,6 @@ Agent nodes are registered via a websocket connection initiated by the `rke2 age
 Agents register with the server using the cluster secret portion of the join token, along with a randomly generated node-specific password, which is stored on the agent at `/etc/rancher/node/password`. The server will store the passwords for individual nodes as Kubernetes secrets, and any subsequent attempts must use the same password. Node password secrets are stored in the `kube-system` namespace with names using the template `<host>.node-password.rke2`. These secrets are deleted when the corresponding Kubernetes node is deleted.

-Note: Prior to RKE2 v1.20.2 servers stored passwords on disk at `/var/lib/rancher/rke2/server/cred/node-passwd`.
-
 If the `/etc/rancher/node` directory of an agent is removed, the password file should be recreated for the agent prior to startup, or the entry removed from the server or Kubernetes cluster (depending on the RKE2 version).
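For example, a minimal sketch of checking and clearing a node password secret with `kubectl` (the node name `agent-1` is a placeholder; substitute the name of your node):

```bash
# List the password secret the server stores for a given node (placeholder node name)
kubectl get secret -n kube-system agent-1.node-password.rke2

# Remove the stale entry so the agent can re-register with a newly generated password
kubectl delete secret -n kube-system agent-1.node-password.rke2
```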
 ## Starting the Server with the Installation Script

diff --git a/docs/datastore/backup_restore.md b/docs/datastore/backup_restore.md
index 38b3880a..31423059 100644
--- a/docs/datastore/backup_restore.md
+++ b/docs/datastore/backup_restore.md
@@ -69,31 +69,27 @@ When rke2 resets the cluster, it creates an empty file at `/var/lib/rancher/rke2
 ### Restoring a Snapshot to New Nodes

-**Warning:** For all versions of rke2 v.1.20.9 and prior, you will need to back up and restore certificates first due to a known issue in which bootstrap data might not save on restore (Steps 1 - 3 below assume this scenario). See [note](#other-notes-on-restoring-a-snapshot) below for an additional version-specific restore caveat on restore.
+1. Back up the server token at `/var/lib/rancher/rke2/server/token` in case the new server will not use the same token. The server token is used to decrypt the bootstrap data inside the snapshot.
-1. Back up the following: `/var/lib/rancher/rke2/server/cred`, `/var/lib/rancher/rke2/server/tls`, `/var/lib/rancher/rke2/server/token`, `/etc/rancher`
-
-2. Restore the certs in Step 1 above to the first new server node.
-
-3. Install rke2 v1.20.8+rke2r1 on the first new server node as in the following example:
-```
-curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="v1.20.8+rke2r1" sh -
-```
-
-4. Stop RKE2 service on all server nodes if it is enabled and initiate the restore from snapshot on the first server node with the following commands:
+2. Stop the RKE2 service on all server nodes if it is enabled, then initiate the restore from the snapshot on the first server node with the following commands:
 ```
 systemctl stop rke2-server
 rke2 server \
   --cluster-reset \
-  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT>
+  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT> \
+  --token=<TOKEN>
 ```
-5. Once the restore process is complete, start the rke2-server service on the first server node as follows:
+3. Once the restore process is complete, start the rke2-server service on the first server node as follows:
 ```
 systemctl start rke2-server
 ```
+:::warning
+The node where the snapshot was taken will appear as `NotReady` in the restored cluster.
+:::
+
-6. You can continue to add new server and worker nodes to cluster per standard [RKE2 HA installation documentation](../install/ha.md#3-launch-additional-server-nodes).
+4. You can continue to add new server and worker nodes to the cluster per the standard [RKE2 HA installation documentation](../install/ha.md#3-launch-additional-server-nodes).

 ### Other Notes on Restoring a Snapshot

@@ -102,24 +98,6 @@ systemctl start rke2-server
 * By default, snapshots are enabled and are scheduled to be taken every 12 hours. The snapshots are written to `${data-dir}/server/db/snapshots` with the default `${data-dir}` being `/var/lib/rancher/rke2`.

-#### Version-specific requirement for rke2 v1.20.11+rke2r1
-
-* When restoring RKE2 from backup to a new node in rke2 v1.20.11+rke2r1, you should ensure that all pods are stopped following the initial restore by running `rke2-killall.sh` as follows:
-
-  ```bash
-  curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION=v1.20.11+rke2r1
-  rke2 server \
-  --cluster-reset \
-  --cluster-reset-restore-path= \
-  --token=
-  rke2-killall.sh
-  ```
-* Once the restore process is complete, enable and start the rke2-server service on the first server node as follows:
-  ```
-  systemctl enable rke2-server
-  systemctl start rke2-server
-  ```
-
 ## S3 Compatible API Support

 RKE2 supports writing etcd snapshots to and restoring etcd snapshots from systems with S3-compatible APIs. S3 support is available for both on-demand and scheduled snapshots.
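For example, a minimal sketch of a server configuration that sends the scheduled snapshots to an S3-compatible endpoint; the endpoint, bucket, and credential values are placeholders, and the option names should be verified against `rke2 server --help` for your release:

```yaml
# /etc/rancher/rke2/config.yaml (server nodes)
etcd-snapshot-schedule-cron: "0 */12 * * *"   # keep the default 12-hour cadence
etcd-snapshot-retention: 5                    # number of snapshots to keep
etcd-s3: true
etcd-s3-endpoint: "s3.example.com"            # placeholder S3-compatible endpoint
etcd-s3-bucket: "rke2-snapshots"              # placeholder bucket
etcd-s3-access-key: "ACCESS_KEY"              # placeholder credentials
etcd-s3-secret-key: "SECRET_KEY"
```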
diff --git a/docs/install/airgap.md b/docs/install/airgap.md
index efccf961..090e7527 100644
--- a/docs/install/airgap.md
+++ b/docs/install/airgap.md
@@ -39,7 +39,7 @@ If your nodes do not have an interface with a default route, a default route mus
 ## Tarball Method

 1. Download the airgap images tarballs from the RKE release artifacts list for the version and platform of RKE2 you are using.
-    * Use `rke2-images.linux-amd64.tar.zst`, or `rke2-images.linux-amd64.tar.gz` for releases prior to v1.20. Zstandard offers better compression ratios and faster decompression speeds compared to gzip.
+    * Use `rke2-images.linux-amd64.tar.zst` or `rke2-images.linux-amd64.tar.gz`. Zstandard offers better compression ratios and faster decompression speeds than gzip.
     * If using the default Canal CNI (`--cni=canal`), you can use either the `rke2-image` legacy archive as described above, or `rke2-images-core` and `rke2-images-canal` archives.
     * If using the alternative Cilium CNI (`--cni=cilium`), you must download the `rke2-images-core` and `rke2-images-cilium` archives instead.
     * If using your own CNI (`--cni=none`), you can download only the `rke2-images-core` archive.
@@ -49,13 +49,10 @@ If your nodes do not have an interface with a default route, a default route mus
 4. [Install RKE2](#install-rke2)

 ## Private Registry Method
-As of RKE2 v1.20, private registry support honors all settings from the [containerd registry configuration](containerd_registry_configuration.md). This includes endpoint override and transport protocol (HTTP/HTTPS), authentication, certificate verification, etc.
-
-Prior to RKE2 v1.20, private registries must use TLS, with a cert trusted by the host CA bundle. If the registry is using a self-signed cert, you can add the cert to the host CA bundle with `update-ca-certificates`. The registry must also allow anonymous (unauthenticated) access.
+Private registry support honors all settings from the [containerd registry configuration](containerd_registry_configuration.md). This includes endpoint overrides, transport protocol (HTTP/HTTPS), authentication, certificate verification, etc.

 1. Add all the required system images to your private registry. A list of images can be obtained from the `.txt` file corresponding to each tarball referenced above, or you may `docker load` the airgap image tarballs, then tag and push the loaded images.
-2. If using a private or self-signed certificate on the registry, add the registry's CA cert to the containerd registry configuration, or operating system's trusted certs for releases prior to v1.20.
-3. [Install RKE2](#install-rke2) using the `system-default-registry` parameter, or use the [containerd registry configuration](containerd_registry_configuration.md) to use your registry as a mirror for docker.io.
+2. [Install RKE2](#install-rke2) using the `system-default-registry` parameter, or use the [containerd registry configuration](containerd_registry_configuration.md) to configure your registry as a mirror for docker.io.

 ## Install RKE2
 The following options to install RKE2 should only be performed after completing one of either the [Tarball Method](#tarball-method) or [Private Registry Method](#private-registry-method).

diff --git a/docs/install/configuration.md b/docs/install/configuration.md
index cd4e7867..56503765 100644
--- a/docs/install/configuration.md
+++ b/docs/install/configuration.md
@@ -42,9 +42,6 @@ It is also possible to use both a configuration file and CLI arguments. In thes
 Finally, the location of the config file can be changed either through the cli argument `--config FILE, -c FILE`, or the environment variable `$RKE2_CONFIG_FILE`.

 ### Multiple Config Files
-:::info Version Gate
-Available as of [v1.21.2+rke2r1](https://github.com/rancher/rke2/releases/tag/v1.21.2%2Brke2r1)
-:::

 Multiple configuration files are supported. By default, configuration files are read from `/etc/rancher/rke2/config.yaml` and `/etc/rancher/rke2/config.yaml.d/*.yaml` in alphabetical order.
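For example, a minimal sketch of a base file plus a drop-in; the file names and values below are illustrative only:

```yaml
# /etc/rancher/rke2/config.yaml -- base configuration
write-kubeconfig-mode: "0644"
tls-san:
  - "rke2.example.com"       # placeholder additional SAN

# /etc/rancher/rke2/config.yaml.d/10-labels.yaml -- drop-in, read after the base file
node-label:
  - "environment=example"    # placeholder node label
```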
diff --git a/docs/install/containerd_registry_configuration.md b/docs/install/containerd_registry_configuration.md
index 504faa77..1e524156 100644
--- a/docs/install/containerd_registry_configuration.md
+++ b/docs/install/containerd_registry_configuration.md
@@ -8,8 +8,6 @@ Upon startup, RKE2 will check to see if a `registries.yaml` file exists at `/etc
 Note that server nodes are schedulable by default. If you have not tainted the server nodes and will be running workloads on them, please ensure you also create the `registries.yaml` file on each server as well.

-**Note:** Prior to RKE2 v1.20, containerd registry configuration is not honored for the initial RKE2 node bootstrapping, only for Kubernetes workloads that are launched after the node is joined to the cluster. Consult the [airgap installation documentation](./airgap.md) if you plan on using this containerd registry feature to bootstrap nodes.
-
 Configuration in containerd can be used to connect to a private registry with a TLS connection and with registries that enable authentication as well. The following section will explain the `registries.yaml` file and give different examples of using private registry configuration in RKE2.

 ## Registries Configuration File
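As a preview of the format that section describes, a minimal sketch of a `registries.yaml` that mirrors docker.io to a private registry using basic authentication and a custom CA; the registry address, credentials, and certificate path are placeholders:

```yaml
# /etc/rancher/rke2/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"    # placeholder private registry
configs:
  "registry.example.com:5000":
    auth:
      username: exampleuser                    # placeholder credentials
      password: examplepassword
    tls:
      ca_file: /etc/ssl/certs/registry-ca.crt  # placeholder CA certificate path
```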
diff --git a/docs/install/quickstart.md b/docs/install/quickstart.md
index ccbaec18..a1795cd4 100644
--- a/docs/install/quickstart.md
+++ b/docs/install/quickstart.md
@@ -11,7 +11,7 @@ This guide will help you quickly launch a cluster with default options.
 - Make sure your environment fulfills the [requirements.](requirements.md) If NetworkManager is installed and enabled on your hosts, [ensure that it is configured to ignore CNI-managed interfaces.](../known_issues.md#networkmanager)

-- For RKE2 versions 1.21 and higher, if the host kernel supports [AppArmor](https://apparmor.net/), the AppArmor tools (usually available via the `apparmor-parser` package) must also be present prior to installing RKE2.
+- If the host kernel supports [AppArmor](https://apparmor.net/), the AppArmor tools (usually available via the `apparmor-parser` package) must also be present prior to installing RKE2.

 - The RKE2 installation process must be run as the root user or through `sudo`.

diff --git a/docs/install/windows_airgap.md b/docs/install/windows_airgap.md
index c1d555aa..0d569c20 100644
--- a/docs/install/windows_airgap.md
+++ b/docs/install/windows_airgap.md
@@ -94,13 +94,10 @@ This will require a reboot for the `Containers` feature to properly function.
 4. [Install RKE2](#install-windows-rke2)

 ## Private Registry Method
-As of RKE2 v1.20, private registry support honors all settings from the [containerd registry configuration](./containerd_registry_configuration.md). This includes endpoint override and transport protocol (HTTP/HTTPS), authentication, certificate verification, etc.
-
-Prior to RKE2 v1.20, private registries must use TLS, with a cert trusted by the host CA bundle. If the registry is using a self-signed cert, you can add the cert to the host CA bundle with `update-ca-certificates`. The registry must also allow anonymous (unauthenticated) access.
+Private registry support honors all settings from the [containerd registry configuration](./containerd_registry_configuration.md). This includes endpoint overrides, transport protocol (HTTP/HTTPS), authentication, certificate verification, etc.

 1. Add all the required system images to your private registry. A list of images can be obtained from the `.txt` file corresponding to each tarball referenced above, or you may `docker load` the airgap image tarballs, then tag and push the loaded images.
-2. If using a private or self-signed certificate on the registry, add the registry's CA cert to the containerd registry configuration, or operating system's trusted certs for releases prior to v1.20.
-3. [Install RKE2](#install-windows-rke2) using the `system-default-registry` parameter, or use the [containerd registry configuration](./containerd_registry_configuration.md) to use your registry as a mirror for docker.io.
+2. [Install RKE2](#install-windows-rke2) using the `system-default-registry` parameter, or use the [containerd registry configuration](./containerd_registry_configuration.md) to configure your registry as a mirror for docker.io.

 ## Install Windows RKE2

diff --git a/docs/known_issues.md b/docs/known_issues.md
index c4880b27..8d787e20 100644
--- a/docs/known_issues.md
+++ b/docs/known_issues.md
@@ -61,25 +61,6 @@ spec:
 For more information regarding exact failures with detailed logs when not following these steps, please see [Issue 504](https://github.com/rancher/rke2/issues/504).

-## Control Groups V2
-
-RKE2 v1.19.5+ ships with `containerd` v1.4.x or later, hence should run on cgroups v2 capable systems.
-Older versions (< 1.19.5) are shipped with containerd 1.3.x fork (with back-ported SELinux commits from 1.4.x)
-which does not support cgroups v2 and requires a little up-front configuration:
-
-Assuming a `systemd`-based system, setting the [systemd.unified_cgroup_hierarchy=0](https://www.freedesktop.org/software/systemd/man/systemd.html#systemd.unified_cgroup_hierarchy)
-kernel parameter will indicate to systemd that it should run with hybrid (cgroups v1 + v2) support.
-Combined with the above, setting the [systemd.legacy_systemd_cgroup_controller](https://www.freedesktop.org/software/systemd/man/systemd.html#systemd.legacy_systemd_cgroup_controller)
-kernel parameter will indicate to systemd that it should run with legacy (cgroups v1) support.
-As these are kernel command-line arguments they must be set in the system bootloader so that they will be
-passed to `systemd` as PID 1 at `/sbin/init`.
-
-See:
-
-- [grub2 manual](https://www.gnu.org/software/grub/manual/grub/grub.html#linux)
-- [systemd manual](https://www.freedesktop.org/software/systemd/man/systemd.html#Kernel%20Command%20Line)
-- [cgroups v2](https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html)
-
 ## Calico with vxlan encapsulation

diff --git a/docs/networking/multus_sriov.md b/docs/networking/multus_sriov.md
index 4dd04e8d..4a90d50e 100644
--- a/docs/networking/multus_sriov.md
+++ b/docs/networking/multus_sriov.md
@@ -107,8 +107,6 @@ NOTE: You should write this file before starting rke2.
 ## Using Multus with SR-IOV

-**SR-IOV experimental support was added in v1.21.2+rke2r1, and is fully supported starting with the April 2023 releases: v1.26.4+rke2r1, v1.25.9+rke2r1, and v1.24.13+rke2r1**
-
 Using the SR-IOV CNI with Multus can help with data-plane acceleration use cases, providing an extra interface in the pod that can achieve very high throughput.
 SR-IOV will not work in all environments, and there are several requirements that must be fulfilled to consider the node as SR-IOV capable:

diff --git a/docs/security/fips_support.md b/docs/security/fips_support.md
index 8b09b191..4cde787b 100644
--- a/docs/security/fips_support.md
+++ b/docs/security/fips_support.md
@@ -45,11 +45,11 @@ To ensure that all aspects of the system architecture are using FIPS 140-2 compl
 ## CNI

-As of v1.21.2, RKE2 supports selecting a different CNI via the `--cni` flag and comes bundled with several CNIs including Canal (default), Calico, Cilium, and Multus. Of these, only Canal (the default) is rebuilt for FIPS compliance.
+RKE2 supports selecting a different CNI via the `--cni` flag and comes bundled with several CNIs, including Canal (default), Calico, Cilium, and Multus. Of these, only Canal (the default) is rebuilt for FIPS compliance.

 ## Ingress

-RKE2 ships with NGINX as its default ingress provider. As of v1.21+, this component is FIPS compliant. There are two primary sub-components for NGINX ingress:
+RKE2 ships with a FIPS-compliant build of NGINX as its default ingress provider. There are two primary sub-components for NGINX ingress:

 - controller - responsible for monitoring/updating Kubernetes resources and configuring the server accordingly
 - server - responsible for accepting and routing traffic
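Tying back to the CNI section above, a minimal sketch of selecting the bundled CNI through the configuration file rather than the `--cni` flag, shown with the FIPS-rebuilt default, Canal:

```yaml
# /etc/rancher/rke2/config.yaml
# Equivalent to passing --cni=canal on the rke2 server command line.
cni: canal

# To run Multus in front of the primary CNI instead (see the Multus section above),
# the option can also be given as a list, for example:
# cni:
#   - multus
#   - canal
```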