-
In the K3s server CLI docs, we say: "For server options, settings that affect cluster-wide configuration should be the same on all servers. Agent options affect only the local node, so if you want all the nodes configured identically, you should make sure to use identical options."
We're planning on updating the RKE2 docs to incorporate changes from the K3s docs. Do you think this would make things more clear?
Agents don't apply manifests; only servers do. It is up to you to decide if you want to deploy manifests from all your servers, or only a few. If you do decide to deploy them from multiple servers, it is your responsibility to ensure that they are in sync across all servers, or different servers will try to apply conflicting changes. This is also called out in the updated docs: https://docs.k3s.io/installation/packaged-components#user-addons
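For illustration only (the exact values below are placeholders, not from the docs), the split can look roughly like this, with cluster-wide server settings kept identical on every server and node-local agent settings varying per node:

# /etc/rancher/rke2/config.yaml on every server - cluster-wide settings, keep identical
cni: canal
etcd-expose-metrics: true
disable: rke2-metrics-server

# /etc/rancher/rke2/config.yaml additions on an individual node - node-local agent settings
node-ip: 10.0.0.11
node-label:
- zone=rack-1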
-
Hi,
UPDATE: I'm using Ubuntu 22.04 LTS.
In order to make things symmetrical, I have developed a script for such deployments. I'm using Kube-VIP and multi-master with 5 worker nodes, and Multus to enable storage traffic offloading. This is a stress-test cluster deployment where I'm checking the load for storage traffic on a separate network, with flannel using a different interface. Each node (CP and worker) has 5 interfaces, like below:
root@devops61:~/root# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:91:5c:fc brd ff:ff:ff:ff:ff:ff
inet 10.192.168.61/16 brd 10.192.255.255 scope global noprefixroute enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::a27d:1a71:81e5:4789/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:f7:eb:dc brd ff:ff:ff:ff:ff:ff
inet 172.16.1.61/16 brd 172.16.255.255 scope global noprefixroute enp2s0
valid_lft forever preferred_lft forever
inet6 fe80::53f8:4685:ad72:b5f7/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:36:8a:0e brd ff:ff:ff:ff:ff:ff
inet 172.17.1.61/16 brd 172.17.255.255 scope global noprefixroute enp3s0
valid_lft forever preferred_lft forever
inet6 fe80::de77:9a1c:3fe2:db08/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:6e:1c:9d brd ff:ff:ff:ff:ff:ff
inet 172.19.1.61/16 brd 172.19.255.255 scope global noprefixroute enp4s0
valid_lft forever preferred_lft forever
inet6 fe80::501a:2a96:cfe6:7514/64 scope link noprefixroute
valid_lft forever preferred_lft forever
6: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:c2:65:cc brd ff:ff:ff:ff:ff:ff
inet 172.18.1.61/16 brd 172.18.255.255 scope global noprefixroute enp5s0
valid_lft forever preferred_lft forever
inet6 fe80::3e70:1359:ced8:3d0d/64 scope link noprefixroute
valid_lft forever preferred_lft forever
The script is given below:
# cat rke2-install-ipvs-mode.sh
#cni: multus,canal
#disable: rke2-ingress-nginx,rke2-metrics-server
#disable-cloud-controller: true
#kubelet-arg: # default Cgroups, enable/disable for both CP and Workers
#- "cgroup-driver=systemd"
#container-runtime-endpoint: "/var/run/crio/crio.sock"
# INSTALL_RKE2_VERSION=stable
# kube-proxy-arg: # default IPTABLES, enable/disable for both CP and Workers
# - proxy-mode=ipvs
#node-taint:
# - "CriticalAddonsOnly=true:NoExecute"
###########################################################################
#
# Enable this HelmChartConfig ONLY ( CP only )
# if you want the traffic between nodes
# routed via a separate interface, e.g. for data paths
#
##########################################################################
# /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
#---
#apiVersion: helm.cattle.io/v1
#kind: HelmChartConfig
#metadata:
# name: rke2-canal
# namespace: kube-system
#spec:
# valuesContent: |-
# flannel:
# iface: "eth1"
###########################################################################
#########################################################################
#
# For IPVS mode to work properly, run this code on all the nodes ( both CP and Workers )
#
# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
# #!/bin/bash
# modprobe -- ip_vs
# modprobe -- ip_vs_rr
# modprobe -- ip_vs_wrr
# modprobe -- ip_vs_sh
# modprobe -- nf_conntrack
# EOF
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
#########################################################################
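#########################################################################
#
# Sketch (not part of the original script): the /etc/sysconfig/modules/ path
# is a RHEL convention and is not read at boot on Ubuntu 22.04. To make the
# IPVS modules persistent across reboots there, systemd's modules-load.d can
# be used instead, roughly like this:
#
# cat > /etc/modules-load.d/ipvs.conf <<EOF
# ip_vs
# ip_vs_rr
# ip_vs_wrr
# ip_vs_sh
# nf_conntrack
# EOF
# systemctl restart systemd-modules-load.service
#
#########################################################################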
mkdir -p /etc/rancher/rke2/
mkdir -p /var/lib/rancher/rke2/server/manifests/
cat<<EOF|tee /etc/rancher/rke2/config.yaml
tls-san:
- devops67.ef.com
- 10.192.168.67
- devops61.ef.com
- 10.192.168.61
- devops62.ef.com
- 10.192.168.62
- devops63.ef.com
- 10.192.168.63
write-kubeconfig-mode: "0644"
etcd-expose-metrics: true
kube-proxy-arg:
- proxy-mode=ipvs
cni:
- multus
- canal
EOF
# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
# echo "Starting RKE2 Engine"
# systemctl start rke2-server
# echo "Enabling RKE2 Engine"
# systemctl enable rke2-server
# By default, rke2-ingress-nginx doesn't allow snippet annotations in Ingresses. Enable them here:
# https://docs.rke2.io/networking?_highlight=ipvs#nginx-ingress-controller
cat <<EOF | tee /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        use-forwarded-headers: "true"
      allowSnippetAnnotations: "true"
EOF
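# For illustration only (not part of the original script): with allowSnippetAnnotations
# enabled above, an Ingress can carry a configuration-snippet annotation. The name,
# host and backend service below are placeholders.
#
# apiVersion: networking.k8s.io/v1
# kind: Ingress
# metadata:
#   name: example-app
#   annotations:
#     nginx.ingress.kubernetes.io/configuration-snippet: |
#       more_set_headers "X-Request-Id: $req_id";
# spec:
#   ingressClassName: nginx
#   rules:
#   - host: app.example.com
#     http:
#       paths:
#       - path: /
#         pathType: Prefix
#         backend:
#           service:
#             name: example-app
#             port:
#               number: 80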
# NodeLocal DNSCache improves the performance by running a dns caching agent on each node.
# To activate this feature, apply the following HelmChartConfig:
#
# Note that we are using IPVS mode, so IPVS support also has to be enabled in the CoreDNS NodeLocal cache config:
# https://docs.rke2.io/networking?_highlight=ipvs#nodelocal-dnscache
cat <<EOF | tee /var/lib/rancher/rke2/server/manifests/rke2-coredns-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-coredns
  namespace: kube-system
spec:
  valuesContent: |-
    nodelocal:
      enabled: true
      ipvs: true
EOF
# Whereabouts is an IP Address Management (IPAM) CNI plugin that assigns IP addresses cluster-wide.
# Starting with RKE2 1.22, RKE2 includes the option to use Whereabouts with Multus to manage the IP addresses of the additional interfaces created through Multus.
# In order to do this, you need to use HelmChartConfig to configure the Multus CNI to use Whereabouts.
# https://docs.rke2.io/install/network_options?_highlight=multus#using-multus-with-the-whereabouts-cni
cat <<EOF | tee /var/lib/rancher/rke2/server/manifests/rke2-multus-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-multus
  namespace: kube-system
spec:
  valuesContent: |-
    rke2-whereabouts:
      enabled: true
EOF
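# Sketch only (not part of the original script): with rke2-whereabouts enabled, an
# extra pod interface for the storage network could be defined via a
# NetworkAttachmentDefinition using macvlan + whereabouts IPAM. The name, master
# interface (enp4s0) and range below are assumptions; adjust for your network.
#
# cat <<EOF | tee /var/lib/rancher/rke2/server/manifests/storage-net.yaml
# apiVersion: k8s.cni.cncf.io/v1
# kind: NetworkAttachmentDefinition
# metadata:
#   name: storage-net
#   namespace: kube-system
# spec:
#   config: '{
#     "cniVersion": "0.3.1",
#     "type": "macvlan",
#     "master": "enp4s0",
#     "mode": "bridge",
#     "ipam": {
#       "type": "whereabouts",
#       "range": "172.19.2.0/24"
#     }
#   }'
# EOF
#
# Pods then attach to it with the annotation k8s.v1.cni.cncf.io/networks: kube-system/storage-net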
cat <<EOF | tee /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-canal
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
      iface: "enp2s0"
EOF
echo "Starting RKE2 Deployment"
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
echo "Starting RKE2 Engine"
systemctl start rke2-server
echo "Enabling RKE2 Engine"
systemctl enable rke2-server
echo "status of RKE2"
systemctl status --no-pager rke2-server -l
echo "Setting up PATH and KUBECONFIG"
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
#Kube-VIP setup
export VIP=10.192.168.67; export INTERFACE=enp1s0
curl https://kube-vip.io/manifests/rbac.yaml > /var/lib/rancher/rke2/server/manifests/kube-vip-rbac.yaml
crictl -r "unix:///run/k3s/containerd/containerd.sock" pull ghcr.io/kube-vip/kube-vip:latest
CONTAINERD_ADDRESS=/run/k3s/containerd/containerd.sock ctr -n k8s.io run --rm --net-host ghcr.io/kube-vip/kube-vip:latest vip /kube-vip manifest daemonset --arp --interface $INTERFACE --address $VIP --controlplane --leaderElection --taint --services --inCluster | tee /var/lib/rancher/rke2/server/manifests/kube-vip.yaml
echo "sleeping for 100 seconds for KubeVIP"
sleep 100
kubectl get ds -n kube-system kube-vip-ds
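# Optional sanity checks (sketch, not in the original script): confirm the VIP is
# answering before joining the other nodes.
# ping -c 3 $VIP
# curl -ks https://$VIP:6443/version || echo "VIP not answering yet"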
RKE2_TOKEN=$(cat /var/lib/rancher/rke2/server/node-token)
echo -e "\v\vRun on other Master Nodes\v\v
cat<<MASTER|tee /tmp/rke2.sh
mkdir -p /etc/rancher/rke2/
mkdir -p /var/lib/rancher/rke2/server/manifests/
cat<<EOF|tee /etc/rancher/rke2/config.yaml
server: https://devops67.ef.com:9345
token: ${RKE2_TOKEN}
tls-san:
- devops67.ef.com
- 10.192.168.67
- devops61.ef.com
- 10.192.168.61
- devops62.ef.com
- 10.192.168.62
- devops63.ef.com
- 10.192.168.63
write-kubeconfig-mode: \"0644\"
etcd-expose-metrics: true
kube-proxy-arg:
- proxy-mode=ipvs
cni:
- multus
- canal
EOF
# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
# systemctl start rke2-server
# systemctl enable rke2-server
# systemctl status rke2-server
cat<<EOF|tee /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        use-forwarded-headers: \"true\"
      allowSnippetAnnotations: \"true\"
EOF
cat<<EOF|tee /var/lib/rancher/rke2/server/manifests/rke2-coredns-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-coredns
  namespace: kube-system
spec:
  valuesContent: |-
    nodelocal:
      enabled: true
      ipvs: true
EOF
cat<<EOF|tee /var/lib/rancher/rke2/server/manifests/rke2-multus-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-multus
  namespace: kube-system
spec:
  valuesContent: |-
    rke2-whereabouts:
      enabled: true
EOF
cat<<EOF|tee /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-canal
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
      iface: \"enp2s0\"
EOF
echo \"Starting RKE2 Deployment\"
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
echo \"Starting RKE2 Engine\"
systemctl start rke2-server
echo \"Enabling RKE2 Engine\"
systemctl enable rke2-server
echo \"status of RKE2\"
systemctl status --no-pager rke2-server -l
echo \"Setting up PATH and KUBECONFIG\"
export PATH=\$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
MASTER
bash /tmp/rke2.sh
"
echo -e "\v\vRun on other Worker Nodes\v\v
cat<<WORKER|tee /tmp/rke2.sh
mkdir -p /etc/rancher/rke2/
cat<<EOF|tee /etc/rancher/rke2/config.yaml
server: https://devops67.ef.com:9345
token: ${RKE2_TOKEN}
write-kubeconfig-mode: \"0644\"
kube-proxy-arg:
- proxy-mode=ipvs
EOF
echo \"Starting RKE2 Deployment\"
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent sh -
echo \"Starting RKE2 Engine\"
systemctl start rke2-agent.service
echo \"Enabling RKE2 Engine\"
systemctl enable rke2-agent.service
echo \"status of RKE2\"
systemctl status --no-pager -l rke2-agent.service
WORKER
bash /tmp/rke2.sh
"
-
Also, are there any constraints involved in deploying the solution so that it uses the internal and external IP addresses of all nodes? And what is the best way to set up pod traffic on a separate interface? I know Rook/Ceph is a different discussion, as RKE2 supports Longhorn (which also has variants that use Multus-based storage networks), but Longhorn is known to crash under poor network performance, causing PVC detachments (a very bad experience at a production customer). The basics of RKE2 are still not clear here, and we need some authoritative information on how to deploy an RKE2 cluster with these options and where each of them should be set.
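For reference, the per-node address settings in question are roughly of this shape (values are placeholders, not a recommendation):

# per-node entries in /etc/rancher/rke2/config.yaml
node-ip: 172.16.1.61              # internal address used for cluster traffic
node-external-ip: 10.192.168.61   # externally reachable address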
-
Hi,
We are currently using RKE2 for our production clusters across different domains, so it gets pretty intensive usage for different purposes.
The problem we are facing is identifying which options should be specified in the
/etc/rancher/rke2/config.yaml
file for server nodes and which for worker nodes. For example:
When using IPVS mode, the documentation does not mention whether this setting should be present on all the nodes (both CP and worker).
Similarly, this is also unclear for other functionality: when customizing the RKE2 Helm charts during deployment, should all of those custom manifests also be placed on the other control-plane nodes and worker nodes, or is it okay to put them only on the first CP node? (We are deploying mostly with Kube-VIP.)
Regards,
NM Rajput.