diff --git a/docs/security/cis_self_assessment123.md b/docs/security/cis_self_assessment123.md
index e3d44d40..bd822a2b 100644
--- a/docs/security/cis_self_assessment123.md
+++ b/docs/security/cis_self_assessment123.md
@@ -2,17 +2,15 @@
title: CIS 1.23 Self-Assessment Guide
---
-### CIS Kubernetes Benchmark v1.23 - RKE2
-
-#### Overview
+## Overview
This document is a companion to the RKE2 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of RKE2, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes benchmark. It is to be used by RKE2 operators, security teams, auditors, and decision makers.
-This guide is specific to the **v1.25** release line of RKE2 and the **v1.23** release of the CIS Kubernetes Benchmark.
+This guide is specific to the **v1.23** release line of RKE2 and the **v1.23** release of the CIS Kubernetes Benchmark.
For more details about each control, including detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark after logging in to [CISecurity.org](https://www.cisecurity.org/benchmark/kubernetes/).
-#### Testing controls methodology
+### Testing controls methodology
Each control in the CIS Kubernetes Benchmark was evaluated against an RKE2 cluster that was configured according to the accompanying hardening guide.
@@ -24,9 +22,6 @@ These are the possible results for each control:
- **Not Applicable** - The control is not applicable to RKE2 because of how it is designed to operate. The remediation section will explain why this is so.
- **Manual - Operator Dependent** - The control is Manual in the CIS benchmark and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure RKE2 does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed.
-### Controls
-
----
## 1 Master Node Security Configuration
### 1.1 Master Node Configuration Files
diff --git a/docs/security/cis_self_assessment124.md b/docs/security/cis_self_assessment124.md
new file mode 100644
index 00000000..a2100caf
--- /dev/null
+++ b/docs/security/cis_self_assessment124.md
@@ -0,0 +1,3087 @@
+---
+title: CIS 1.24 Self-Assessment Guide
+---
+
+## Overview
+
+This document is a companion to the RKE2 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of RKE2, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes benchmark. It is to be used by RKE2 operators, security teams, auditors, and decision makers.
+
+This guide is specific to the **v1.24** release line of RKE2 and the **v1.24** release of the CIS Kubernetes Benchmark.
+
+For more information about each control, including detailed rationales and descriptions of the checks, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.24. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+### Testing controls methodology
+
+Each control in the CIS Kubernetes Benchmark was evaluated against an RKE2 cluster that was configured according to the accompanying hardening guide.
+
+These are the possible results for each control:
+
+- **PASS** - The RKE2 cluster under test passed the audit outlined in the benchmark.
+- **Not Applicable** - The control is not applicable to RKE2 because of how it is designed to operate. The rationale section will explain why this is so.
+- **WARN** - The control is manual in the CIS benchmark and depends on manual operator intervention. The remediation section will provide guidance on how to achieve a PASS result.
+
+## 1 Control Plane Security Configuration
+
+### 1.1 Control Plane Node Configuration Files
+
+#### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
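+
+To verify the change, one option is to reuse the `stat` pattern that the automated file checks in this guide use; after remediation it should report `permissions=600` or more restrictive:
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+```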
+
+#### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+
+
+#### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml`
+
+#### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml`
+
+
+#### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml`
+
+#### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is present
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml`
+
+
+#### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml`
+
+#### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+If running the master role only, with no etcd role, this check is not applicable.
+If the controlplane and etcd roles are present on the same node and this check fails,
+run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml`
+
+
+#### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Note that for many CNIs, a lock file is created with permissions 750. This is expected and can be ignored.
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/cni/networks/` and `chmod 600 /etc/cni/net.d/`
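+
+Because the intent of this control is to restrict the CNI configuration files themselves, a minimal sketch of applying the permissions file by file, assuming the default RKE2 CNI paths inspected by the audit in 1.1.10, is:
+
+```bash
+# Sketch only, assuming the default CNI paths; missing directories are skipped.
+find /etc/cni/net.d /var/lib/cni/networks -type f 2> /dev/null -exec chmod 600 {} \;
+```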
+
+#### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result:** 'root:root' is present
+
+
+Returned Value:
+
+```console
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root `
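+
+The target path above is intentionally generic. A minimal sketch of applying the ownership across the same default paths the audit command inspects would be:
+
+```bash
+# Sketch only, assuming the default CNI paths from the audit above.
+find /etc/cni/net.d /var/lib/cni/networks -type f 2> /dev/null -exec chown root:root {} \;
+```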
+
+
+#### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/db/etcd
+```
+
+**Expected Result:** permissions has permissions 700, expected 700 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=700
+```
+
+
+
+Remediation:
+
+If running the master role only, with no etcd role, this check is not applicable.
+If the controlplane and etcd roles are present on the same node and this check fails, then
+on the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+`chmod 700 /var/lib/rancher/rke2/server/db/etcd`
+
+
+#### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/db/etcd
+```
+
+**Expected Result:** 'etcd:etcd' is present
+
+
+Returned Value:
+
+```console
+etcd:etcd
+```
+
+
+
+Remediation:
+
+If running the master role only, with no etcd role, this check is not applicable.
+If the controlplane and etcd roles are present on the same node and this check fails, then
+on the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, `chown etcd:etcd /var/lib/rancher/rke2/server/db/etcd`
+
+
+#### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/server/cred/admin.kubeconfig`
+
+
+#### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/server/cred/admin.kubeconfig`
+
+
+#### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig`
+
+
+#### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig`
+
+
+#### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/server/cred/controller.kubeconfig`
+
+
+#### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root /var/lib/rancher/rke2/server/cred/controller.kubeconfig`
+
+
+#### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/tls
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown -R root:root /var/lib/rancher/rke2/server/tls/`
+
+
+#### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod -R 600 /var/lib/rancher/rke2/server/tls/*.crt`
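+
+Analogous to the key file audit in check 1.1.21 below, the resulting certificate permissions can be verified with:
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.crt
+```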
+
+#### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.key
+```
+
+**Expected Result:** 'permissions' is equal to '600'
+
+
+Returned Value:
+
+```console
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod -R 600 /var/lib/rancher/rke2/server/tls/*.key`
+
+
+### 1.2 API Server
+
+#### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--anonymous-auth' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --anonymous-auth argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove anything similar to below.
+```
+kube-apiserver-arg:
+ - "anonymous-auth=true"
+```
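+
+To spot-check only this flag in the running process, rather than reading the full command line, one option is:
+
+```bash
+/bin/ps -fC kube-apiserver | grep -o -- '--anonymous-auth=[^ ]*'
+```
+
+On a hardened node this should print `--anonymous-auth=false`.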
+
+
+#### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--token-auth-file' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+Follow the documentation and configure alternate mechanisms for authentication.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove anything similar to below.
+```
+kube-apiserver-arg:
+ - "token-auth-file="
+```
+
+
+#### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set DenyServiceExternalIPs.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=DenyServiceExternalIPs"
+```
+
+
+#### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--kubelet-https' is present OR '--kubelet-https' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and remove the --kubelet-https parameter.
+
+
+#### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the kubelet client certificate and key.
+They are generated and located at /var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt and /var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
+If for some reason you need to provide your own certificate and key, you can set the
+below parameters in the RKE2 config file /etc/rancher/rke2/config.yaml.
+```
+kube-apiserver-arg:
+ - "kubelet-client-certificate="
+ - "kubelet-client-key="
+```
+
+
+#### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--kubelet-certificate-authority' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the kubelet CA cert file, at /var/lib/rancher/rke2/server/tls/server-ca.crt.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "kubelet-certificate-authority="
+```
+
+
+#### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--authorization-mode' does not have 'AlwaysAllow'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --authorization-mode to AlwaysAllow.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "authorization-mode=AlwaysAllow"
+```
+
+
+#### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--authorization-mode' has 'Node'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --authorization-mode to Node and RBAC.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and
+ensure that you are not overriding authorization-mode.
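+
+For illustration only, a hypothetical override like the one below (which omits Node) would cause this check to fail and should be removed or amended to include Node:
+```
+kube-apiserver-arg:
+  - "authorization-mode=RBAC"
+```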
+
+
+#### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--authorization-mode' has 'RBAC'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --authorization-mode to Node and RBAC.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and
+ensure that you are not overriding authorization-mode.
+
+
+#### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the below parameters.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=...,EventRateLimit,..."
+ - "admission-control-config-file="
+```
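+
+For illustration only, a minimal admission configuration might look like the sketch below. The file path and the limit values are hypothetical examples; the file you create should be referenced as the value of `admission-control-config-file=` above.
+```
+# Hypothetical file, e.g. /etc/rancher/rke2/admission-config.yaml (example values only)
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+  - name: EventRateLimit
+    configuration:
+      apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+      kind: Configuration
+      limits:
+        - type: Namespace
+          qps: 50
+          burst: 100
+          cacheSize: 2000
+```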
+
+#### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --enable-admission-plugins to AlwaysAdmit.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=AlwaysAdmit"
+```
+
+
+#### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Permissive, per CIS guidelines,
+"This setting could impact offline or isolated clusters, which have images pre-loaded and
+do not have access to a registry to pull in-use images. This setting is not appropriate for
+clusters which use this configuration."
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+AlwaysPullImages.
+`--enable-admission-plugins=...,AlwaysPullImages,...`
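+
+Expressed in the RKE2 config file convention used throughout this guide, the same change could look like the snippet below, where `...` stands for the other admission plugins already enabled on your cluster:
+```
+kube-apiserver-arg:
+  - "enable-admission-plugins=...,AlwaysPullImages,..."
+```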
+
+#### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+SecurityContextDeny, unless PodSecurityPolicy is already in place.
+`--enable-admission-plugins=...,SecurityContextDeny,...`
+
+
+#### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --disable-admission-plugins to anything.
+Follow the documentation and create ServiceAccount objects as per your environment.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "disable-admission-plugins=ServiceAccount"
+```
+
+
+#### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --disable-admission-plugins argument.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "disable-admission-plugins=...,NamespaceLifecycle,..."
+```
+
+
+#### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--enable-admission-plugins' has 'NodeRestriction'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --enable-admission-plugins argument to include NodeRestriction.
+Check the RKE2 config file /etc/rancher/rke2/config.yaml, and ensure that you are not overriding the admission plugins.
+If you are, include NodeRestriction in the list.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=...,NodeRestriction,..."
+```
+
+
+#### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--secure-port' is greater than 0 OR '--secure-port' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
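+
+As an illustration only (this assumes you prefer to manage the flag through the RKE2 config file /etc/rancher/rke2/config.yaml rather than the pod manifest), the same setting can be expressed with kube-apiserver-arg; 6443 is the default value shown in the audit output above.
+```
+kube-apiserver-arg:
+ - "secure-port=6443"  # illustrative value; any non-zero port satisfies the control
+```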
+
+
+#### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--profiling' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --profiling argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "profiling=true"
+```
+
+
+#### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-path' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-path argument to /var/lib/rancher/rke2/server/logs/audit.log.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+```
+kube-apiserver-arg:
+ - "audit-log-path=/var/log/rke2/audit.log"
+```
+
+
+#### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-maxage' is greater or equal to 30
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-maxage argument to 30 days.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-maxage parameter to an appropriate number of days, for example,
+```
+kube-apiserver-arg:
+ - "audit-log-maxage=40"
+```
+
+
+#### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-maxbackup' is greater or equal to 10
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-maxbackup argument to 10.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to an appropriate value.
+For example,
+```
+kube-apiserver-arg:
+ - "audit-log-maxbackup=15"
+```
+
+
+#### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-maxsize' is greater or equal to 100
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-maxsize argument to 100 MB.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example,
+```
+kube-apiserver-arg:
+ - "audit-log-maxsize=150"
+```
+
+
+#### 1.2.23 Ensure that the --request-timeout argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--request-timeout' is not present OR '--request-timeout' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+This is a permissive check. Per CIS guidelines,
+"it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed".
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml
+and set the below parameter if needed. For example,
+```
+kube-apiserver-arg:
+ - "request-timeout=300s"
+```
+
+
+#### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--service-account-lookup' is not present OR '--service-account-lookup' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --service-account-lookup argument.
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the service-account-lookup. For example,
+```
+kube-apiserver-arg:
+ - "service-account-lookup=true"
+```
+Alternatively, you can delete the service-account-lookup parameter from this file so
+that the default takes effect.
+
+
+#### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--service-account-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+RKE2 automatically generates and sets the service account key file.
+It is located at /var/lib/rancher/rke2/server/tls/service.key.
+If this check fails, edit RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "service-account-key-file="
+```
+
+
+#### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--etcd-certfile' is present AND '--etcd-keyfile' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+RKE2 automatically generates and sets the etcd certificate and key files.
+They are located at /var/lib/rancher/rke2/server/tls/etcd/client.crt and /var/lib/rancher/rke2/server/tls/etcd/client.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "etcd-certfile="
+ - "etcd-keyfile="
+```
+
+
+#### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--tls-cert-file' is present AND '--tls-private-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically generates and provides the TLS certificate and private key for the apiserver.
+They are generated and located at /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt and /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "tls-cert-file="
+ - "tls-private-key-file="
+```
+
+
+#### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--client-ca-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the client certificate authority file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/client-ca.crt.
+If for some reason you need to provide your own CA certificate, look at using the rke2 certificate command line tool.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "client-ca-file="
+```
+
+
+#### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--etcd-cafile' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the etcd certificate authority file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt.
+If for some reason you need to provide your own CA certificate, look at using the rke2 certificate command line tool.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "etcd-cafile="
+```
+
+
+#### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--encryption-provider-config' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+RKE2 is always configured to encrypt secrets.
+Secrets encryption is managed with the rke2 secrets-encrypt command line tool.
+If needed, you can find the generated encryption config at /var/lib/rancher/rke2/server/cred/encryption-config.json.
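+
+As a quick check (a sketch, assuming the rke2 binary is available on the server node), the secrets-encrypt tool mentioned above can report the current encryption state:
+```bash
+# Shows whether secrets encryption is enabled and which key is currently active
+rke2 secrets-encrypt status
+```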
+
+
+#### 1.2.31 Ensure that encryption providers are appropriately configured (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if grep aescbc /var/lib/rancher/rke2/server/cred/encryption-config.json; then echo 0; fi'
+```
+
+**Expected Result:** '0' is present
+
+
+Returned Value:
+
+```console
+{"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[{"name":"aescbckey","secret":"WQ1kR9bzM4ipqXPPTCkgiAWj2Wv9ULgXdryqhxoyp1E="}]}},{"identity":{}}]}]}
+0
+```
+
+
+
+Remediation:
+
+RKE2 is always configured to use the aescbc encryption provider to encrypt secrets.
+Secrets encryption is managed with the rke2 secrets-encrypt command line tool.
+If needed, you can find the generated encryption config at /var/lib/rancher/rke2/server/cred/encryption-config.json.
+
+
+#### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2539 2487 4 17:27 ? 00:01:20 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, the RKE2 kube-apiserver complies with this test. Changes to these values may cause regressions; ensure that all apiserver clients support the new TLS configuration before applying it in production deployments.
+If a custom TLS configuration is required, consider also creating a custom version of this rule that aligns with your requirements.
+If this check fails, remove any custom configuration around `tls-cipher-suites` or update the /etc/rancher/rke2/config.yaml file to match the default by adding the following:
+```
+kube-apiserver-arg:
+  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
+```
+
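+To spot-check the running API server after any change, the cipher list can be pulled straight out of the process arguments. A minimal sketch, assuming the same `ps` audit shown above is available on the node:
+
+```bash
+# Extract only the --tls-cipher-suites flag from the running kube-apiserver
+# and print one suite per line.
+/bin/ps -fC kube-apiserver | tr ' ' '\n' | grep -- '--tls-cipher-suites=' | tr ',' '\n'
+```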
+
+### 1.3 Controller Manager
+
+#### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--terminated-pod-gc-threshold' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2692 2597 1 17:27 ? 00:00:21 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets a terminated-pod-gc-threshold of 1000.
+If you need to change this value, edit the RKE2 config file /etc/rancher/rke2/config.yaml on the control plane node
+and set the --terminated-pod-gc-threshold to an appropriate threshold, for example:
+```
+kube-controller-manager-arg:
+ - "terminated-pod-gc-threshold=10"
+```
+
+
+#### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--profiling' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2692 2597 1 17:27 ? 00:00:21 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --profiling argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "profiling=true"
+```
+
+
+#### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--use-service-account-credentials' is not equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2692 2597 1 17:27 ? 00:00:21 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --use-service-account-credentials argument to true.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "use-service-account-credentials=false"
+```
+
+
+#### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--service-account-private-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2692 2597 1 17:27 ? 00:00:21 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the service account private key file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/service.current.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "service-account-private-key-file="
+```
+
+
+#### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--root-ca-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2692 2597 1 17:27 ? 00:00:21 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the root CA file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/server-ca.crt.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "root-ca-file="
+```
+
+
+#### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--feature-gates' does not have 'RotateKubeletServerCertificate=false' OR '--feature-gates' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2692 2597 1 17:27 ? 00:00:21 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
+--feature-gates=RotateKubeletServerCertificate=true
+
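+The same setting can also be carried in the RKE2 config file, following the `kube-controller-manager-arg` pattern used elsewhere in this guide. A hedged sketch, assuming no `kube-controller-manager-arg` block already exists in the file (merge the entry by hand if one does), since RKE2 manages the static pod manifests and direct edits may not persist:
+
+```bash
+# Append the feature gate to the RKE2 config and restart so the
+# kube-controller-manager manifest is regenerated with the new flag.
+cat <<'EOF' >> /etc/rancher/rke2/config.yaml
+kube-controller-manager-arg:
+  - "feature-gates=RotateKubeletServerCertificate=true"
+EOF
+systemctl restart rke2-server.service
+```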
+
+#### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2692 2597 1 17:27 ? 00:00:21 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --bind-address argument to 127.0.0.1.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "bind-address="
+```
+
+
+### 1.4 Scheduler
+
+#### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-scheduler
+```
+
+**Expected Result:** '--profiling' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2676 2575 0 17:27 ? 00:00:04 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --profiling argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-scheduler-arg:
+ - "profiling=true"
+```
+
+
+#### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-scheduler
+```
+
+**Expected Result:** '--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2676 2575 0 17:27 ? 00:00:04 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --bind-address argument to 127.0.0.1.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-scheduler-arg:
+ - "bind-address="
+```
+
+
+## 2 Etcd Node Configuration
+
+#### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+
+```
+
+**Expected Result:** '.client-transport-security.cert-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/server-client.crt' AND '.client-transport-security.key-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/server-client.key'
+
+
+Returned Value:
+
+```console
+advertise-client-urls: https://10.10.10.100:2379
+client-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
+data-dir: /var/lib/rancher/rke2/server/db/etcd
+election-timeout: 5000
+experimental-initial-corrupt-check: true
+heartbeat-interval: 500
+initial-advertise-peer-urls: https://10.10.10.100:2380
+initial-cluster: server-0-11a72b92=https://10.10.10.100:2380
+initial-cluster-state: new
+listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
+listen-metrics-urls: http://127.0.0.1:2381
+listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
+log-outputs:
+- stderr
+logger: zap
+name: server-0-11a72b92
+peer-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
+snapshot-count: 10000
+```
+
+
+
+Remediation:
+
+By default, RKE2 generates cert and key files for etcd.
+These are located in /var/lib/rancher/rke2/server/tls/etcd/.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to use custom cert and key files.
+
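+One quick way to confirm these values is to print just the client TLS block from the generated etcd configuration. A sketch, assuming the default config path referenced above:
+
+```bash
+# Show the client TLS settings from the etcd config generated by RKE2.
+grep -A 4 'client-transport-security' /var/lib/rancher/rke2/server/db/etcd/config
+```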
+
+#### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_CLIENT_CERT_AUTH' is present OR '.client-transport-security.client-cert-auth' is equal to 'true'
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=6f10bbaa24d25998be5d5cb804762c1171adbaec88e64b31a7ae0f1bd66e6e8a
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --client-cert-auth parameter to true.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to disable client certificate authentication.
+
+
+#### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present OR '.client-transport-security.auto-tls' is present
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=6f10bbaa24d25998be5d5cb804762c1171adbaec88e64b31a7ae0f1bd66e6e8a
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --auto-tls parameter.
+If this check fails, edit the etcd configuration file /var/lib/rancher/rke2/server/db/etcd/config on the control plane
+node and either remove the --auto-tls parameter or set it to false.
+```
+client-transport-security:
+  auto-tls: false
+```
+
+
+#### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+
+```
+
+**Expected Result:** '.peer-transport-security.cert-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-server-client.crt' AND '.peer-transport-security.key-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-server-client.key'
+
+
+Returned Value:
+
+```console
+advertise-client-urls: https://10.10.10.100:2379
+client-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
+data-dir: /var/lib/rancher/rke2/server/db/etcd
+election-timeout: 5000
+experimental-initial-corrupt-check: true
+heartbeat-interval: 500
+initial-advertise-peer-urls: https://10.10.10.100:2380
+initial-cluster: server-0-11a72b92=https://10.10.10.100:2380
+initial-cluster-state: new
+listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
+listen-metrics-urls: http://127.0.0.1:2381
+listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
+log-outputs:
+- stderr
+logger: zap
+name: server-0-11a72b92
+peer-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
+snapshot-count: 10000
+```
+
+
+
+Remediation:
+
+By default, RKE2 generates peer cert and key files for etcd.
+These are located in /var/lib/rancher/rke2/server/tls/etcd/.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to use custom peer cert and key files.
+
+
+#### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_PEER_CLIENT_CERT_AUTH' is present OR '.peer-transport-security.client-cert-auth' is equal to 'true'
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=6f10bbaa24d25998be5d5cb804762c1171adbaec88e64b31a7ae0f1bd66e6e8a
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --peer-client-cert-auth parameter to true.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to disable peer client certificate authentication.
+
+
+#### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present OR '.peer-transport-security.auto-tls' is present
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=6f10bbaa24d25998be5d5cb804762c1171adbaec88e64b31a7ae0f1bd66e6e8a
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --peer-auto-tls parameter.
+If this check fails, edit the etcd configuration file /var/lib/rancher/rke2/server/db/etcd/config on the control plane
+node and either remove the --peer-auto-tls parameter or set it to false.
+```
+peer-transport-security:
+  auto-tls: false
+```
+
+
+#### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_TRUSTED_CA_FILE' is present OR '.peer-transport-security.trusted-ca-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=6f10bbaa24d25998be5d5cb804762c1171adbaec88e64b31a7ae0f1bd66e6e8a
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 generates a unique certificate authority for etcd.
+This is located at /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to use a shared certificate authority.
+
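+To double-check that etcd's CA is distinct from the cluster's server CA, comparing certificate fingerprints is usually enough. A sketch using the default RKE2 paths referenced in this guide; the two fingerprints should differ:
+
+```bash
+# Compare the etcd peer CA with the cluster server CA.
+openssl x509 -noout -fingerprint -sha256 -in /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
+openssl x509 -noout -fingerprint -sha256 -in /var/lib/rancher/rke2/server/tls/server-ca.crt
+```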
+
+## 3 Control Plane Configuration
+
+### 3.1 Authentication and Authorization
+
+#### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
+
+### 3.2 Logging
+
+#### 3.2.1 Ensure that a minimal audit policy is created (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep | grep -o audit-policy-file
+```
+
+**Expected Result:** 'audit-policy-file' is equal to 'audit-policy-file'
+
+
+Returned Value:
+
+```console
+audit-policy-file
+```
+
+
+
+Remediation:
+
+Create an audit policy file for your cluster.
+
+
+#### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas,
+- Access to Secrets managed by the cluster. Care should be taken to only
+ log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+ order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
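+
+As a starting point, a minimal policy along these lines can be written to the path RKE2 already passes to the API server (`/etc/rancher/rke2/audit-policy.yaml` in the audit output above). This is an illustrative sketch only; back up any existing policy and extend the rules to match your own requirements:
+
+```bash
+# Illustrative minimal audit policy; review before using in production.
+cat <<'EOF' > /etc/rancher/rke2/audit-policy.yaml
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  # Log only Metadata for Secrets, ConfigMaps and TokenReviews so that
+  # sensitive request bodies are never recorded.
+  - level: Metadata
+    resources:
+      - group: ""
+        resources: ["secrets", "configmaps"]
+      - group: "authentication.k8s.io"
+        resources: ["tokenreviews"]
+  # Everything else is logged at the Metadata level as a baseline.
+  - level: Metadata
+EOF
+# Restart RKE2 so the kube-apiserver picks up the policy.
+systemctl restart rke2-server.service
+```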
+
+## 4 Worker Node Security Configuration
+
+### 4.1 Worker Node Configuration Files
+
+#### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
+
+#### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
+
+#### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result:** permissions has value 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig`
+
+
+#### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result:** 'root:root' is present
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig`
+
+
+#### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result:** '600' is equal to '600'
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/kubelet.kubeconfig`
+
+
+#### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+`chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig`
+
+
+#### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/client-ca.crt; then stat -c permissions=%a /var/lib/rancher/rke2/agent/client-ca.crt; fi'
+```
+
+**Expected Result:** permissions has value 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/client-ca.crt`
+
+
+#### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/client-ca.crt; then stat -c %U:%G /var/lib/rancher/rke2/agent/client-ca.crt; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the following command to modify the ownership of the --client-ca-file.
+`chown root:root /var/lib/rancher/rke2/agent/client-ca.crt`
+
+
+#### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
+
+#### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
+
+### 4.2 Kubelet
+
+#### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--anonymous-auth' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --anonymous-auth to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to the ones below.
+```
+kubelet-arg:
+ - "anonymous-auth=true"
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--authorization-mode' does not have 'AlwaysAllow'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --authorization-mode to AlwaysAllow.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to the ones below.
+```
+kubelet-arg:
+ - "authorization-mode=AlwaysAllow"
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--client-ca-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the client ca certificate for the Kubelet.
+It is generated and located at /var/lib/rancher/rke2/agent/client-ca.crt
+
+
+#### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--read-only-port' is equal to '0' OR '--read-only-port' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --read-only-port to 0. If you have set this to a different value, you
+should set it back to 0. Edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to the ones below.
+```
+kubelet-arg:
+ - "read-only-port=XXXX"
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--streaming-connection-idle-timeout' is present OR '--streaming-connection-idle-timeout' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the following parameter to an appropriate value.
+```
+kubelet-arg:
+ - "streaming-connection-idle-timeout=5m"
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--protect-kernel-defaults' is equal to 'true'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in the KUBELET_SYSTEM_PODS_ARGS variable.
+--protect-kernel-defaults=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
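+On RKE2 the flag itself can be confirmed from the kubelet process, and the kernel settings it guards can be spot-checked with sysctl. A sketch; the expected values noted below reflect common hardening guidance and should be adjusted to your own baseline:
+
+```bash
+# Confirm the flag on the running kubelet.
+/bin/ps -fC kubelet | grep -o -- '--protect-kernel-defaults=[a-z]*'
+
+# Spot-check kernel parameters the kubelet validates when the flag is true
+# (commonly vm.overcommit_memory=1, kernel.panic=10, kernel.panic_on_oops=1).
+sysctl vm.overcommit_memory kernel.panic kernel.panic_on_oops
+```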
+
+#### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--make-iptables-util-chains' is present OR '--make-iptables-util-chains' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the following parameter.
+```
+kubelet-arg:
+ - "make-iptables-util-chains=true"
+```
+Or, remove the --make-iptables-util-chains argument to let RKE2 use the default value.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.8 Ensure that the --hostname-override argument is not set (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+By default, RKE2 does set the --hostname-override argument. Per CIS guidelines, this is done to comply
+with cloud providers that require this flag so that hostnames match node names.
+
+#### 4.2.9 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the following parameter to an appropriate value.
+```
+kubelet-arg:
+ - "event-qps="
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+#### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--tls-cert-file' is present AND '--tls-private-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the TLS certificate and private key for the Kubelet.
+They are generated and located at /var/lib/rancher/rke2/agent/serving-kubelet.crt and /var/lib/rancher/rke2/agent/serving-kubelet.key
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to below.
+```
+kubelet-arg:
+ - "tls-cert-file="
+ - "tls-private-key-file="
+```
+
+
+#### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--rotate-certificates' is present OR '--rotate-certificates' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --rotate-certificates argument.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any rotate-certificates parameter.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** 'RotateKubeletServerCertificate' is present OR 'RotateKubeletServerCertificate' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2317 2256 2 17:27 ? 00:00:40 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the RotateKubeletServerCertificate feature gate.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any RotateKubeletServerCertificate parameter.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the kubelet's TLS cipher suites to the following,
+```
+kubelet-arg:
+ - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
+```
+or to a subset of these values.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+## 5 Kubernetes Policies
+
+### 5.1 RBAC and Service Accounts
+
+#### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower privileged role and then remove the
+clusterrolebinding to the cluster-admin role:
+kubectl delete clusterrolebinding [name]
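+
+A quick way to enumerate the bindings before deciding which to remove, sketched here on the assumption that `jq` is available on the host:
+
+```bash
+# List every ClusterRoleBinding that grants the cluster-admin role.
+kubectl get clusterrolebindings -o json \
+  | jq -r '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name'
+```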
+
+#### 5.1.2 Minimize access to secrets (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
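+
+Subject-by-subject checks can be done with `kubectl auth can-i`; for example, to confirm whether a given service account can read Secrets (the `default`/`default` names below are placeholders):
+
+```bash
+# Prints "yes" or "no" for the impersonated service account.
+kubectl auth can-i get secrets \
+  --as=system:serviceaccount:default:default -n default
+```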
+
+#### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
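+
+Wildcard usage can be surveyed with a small `jq` filter; note that several built-in roles (such as cluster-admin) will legitimately appear in the output. A sketch assuming `jq` is installed:
+
+```bash
+# List ClusterRoles whose rules use "*" for verbs, resources or apiGroups.
+kubectl get clusterroles -o json \
+  | jq -r '.items[]
+      | select(any(.rules[]?;
+          ((.verbs // []) | index("*")) != null
+          or ((.resources // []) | index("*")) != null
+          or ((.apiGroups // []) | index("*")) != null))
+      | .metadata.name'
+```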
+
+#### 5.1.4 Minimize access to create pods (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+#### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+check_for_default_sa.sh
+```
+
+**Expected Result:** 'true' is equal to 'true'
+
+
+Returned Value:
+
+```console
+true
+```
+
+
+
+Remediation:
+
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value
+automountServiceAccountToken: false
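+
+For example, the default service account of a namespace could be patched as follows (the namespace name is illustrative):
+
+```bash
+# Disable automatic token mounting on the default service account of one namespace
+kubectl patch serviceaccount default -n my-namespace -p '{"automountServiceAccountToken": false}'
+```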
+
+
+#### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Modify the definition of pods and service accounts which do not need to mount service
+account tokens to disable it.
+
+#### 5.1.7 Avoid use of system:masters group (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Remove the system:masters group from all users in the cluster.
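+
+For certificate-based users, group membership comes from the certificate's organization (O) field; one illustrative way to inspect it for a kubeconfig user is:
+
+```bash
+# Print the subject of the first kubeconfig user's client certificate;
+# an "O = system:masters" entry indicates membership in the system:masters group
+kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
+  | base64 -d | openssl x509 -noout -subject
+```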
+
+#### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+### 5.2 Pod Security Standards
+
+#### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
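+
+If Pod Security Admission is the chosen mechanism, namespaces can be labeled with an enforcement level; the namespace name and level below are illustrative:
+
+```bash
+# Enforce the "restricted" Pod Security Standard on one namespace
+kubectl label --overwrite namespace my-namespace \
+  pod-security.kubernetes.io/enforce=restricted \
+  pod-security.kubernetes.io/enforce-version=latest
+```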
+
+#### 5.2.2 Minimize the admission of privileged containers (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+kubectl get psp global-restricted-psp && kubectl get psp global-restricted-psp -o json | jq -r ".spec.runAsUser.rule" || kubectl get psp restricted-noroot-psp && kubectl get psp restricted-noroot-psp -o json | jq -r ".spec.runAsUser.rule"
+```
+
+**Expected Result:** 'MustRunAsNonRoot' is equal to 'MustRunAsNonRoot'
+
+
+Returned Value:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
+global-restricted-psp false RunAsAny MustRunAsNonRoot MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+MustRunAsNonRoot
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+Error from server (NotFound): podsecuritypolicies.policy "restricted-noroot-psp" not found
+```
+
+
+
+Remediation:
+
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+
+#### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result:** 'count' is greater than 0
+
+
+Returned Value:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+--count=1
+```
+
+
+
+Remediation:
+
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+
+#### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result:** 'count' is greater than 0
+
+
+Returned Value:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+--count=1
+```
+
+
+
+Remediation:
+
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+
+#### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result:** 'count' is greater than 0
+
+
+Returned Value:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+--count=1
+```
+
+
+
+Remediation:
+
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
+
+
+#### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result:** 'count' is greater than 0
+
+
+Returned Value:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+--count=1
+```
+
+
+
+Remediation:
+
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+
+#### 5.2.7 Minimize the admission of root containers (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result:** 'count' is greater than 0
+
+
+Returned Value:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+--count=1
+```
+
+
+
+Remediation:
+
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+
+#### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+kubectl get psp global-restricted-psp && kubectl get psp global-restricted-psp -o json | jq -r .spec.requiredDropCapabilities[] || kubectl get psp restricted-noroot-psp && kubectl get psp restricted-noroot-psp -o json | jq -r .spec.requiredDropCapabilities[]
+```
+
+**Expected Result:** 'ALL' is equal to 'ALL'
+
+
+Returned Value:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
+global-restricted-psp false RunAsAny MustRunAsNonRoot MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+ALL
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
+Error from server (NotFound): podsecuritypolicies.policy "restricted-noroot-psp" not found
+```
+
+
+
+Remediation:
+
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+
+#### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
+
+**Result:** WARN
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+#### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+#### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+#### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+#### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+### 5.3 Network Policies and CNI
+
+#### 5.3.1 Ensure that the CNI in use supports Network Policies (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+kubectl get pods --all-namespaces --selector='k8s-app in (calico-node, canal, cilium)' -o name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result:** 'count' is greater than 0
+
+
+Returned Value:
+
+```console
+--count=1
+```
+
+
+
+Remediation:
+
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+
+#### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+check_for_rke2_network_policies.sh
+```
+
+**Expected Result:** 'true' is equal to 'true'
+
+
+Returned Value:
+
+```console
+true
+```
+
+
+
+Remediation:
+
+Follow the documentation and create NetworkPolicy objects as you need them.
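+
+As a sketch, a default-deny ingress policy for a single namespace could look like the following (the namespace and policy name are illustrative):
+
+```bash
+# Apply a deny-all-ingress NetworkPolicy to one namespace
+kubectl apply -n my-namespace -f - <<'EOF'
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+spec:
+  podSelector: {}
+  policyTypes:
+    - Ingress
+EOF
+```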
+
+
+### 5.4 Secrets Management
+
+#### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
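+
+For reference, a minimal sketch of mounting a Secret as files instead of environment variables (all names and the image are illustrative):
+
+```bash
+# Pod that mounts the Secret "my-secret" read-only at /etc/my-secret
+kubectl apply -n my-namespace -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-as-file-example
+spec:
+  containers:
+    - name: app
+      image: registry.example.com/my-app:latest
+      volumeMounts:
+        - name: creds
+          mountPath: /etc/my-secret
+          readOnly: true
+  volumes:
+    - name: creds
+      secret:
+        secretName: my-secret
+EOF
+```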
+
+#### 5.4.2 Consider external secret storage (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+### 5.5 Extensible Admission Control
+
+#### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the Kubernetes documentation and setup image provenance.
+
+### 5.7 General Policies
+
+#### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+#### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is as below:
+```
+  securityContext:
+    seccompProfile:
+      type: RuntimeDefault
+```
+
+#### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
+
+#### 5.7.4 The default namespace should not be used (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
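+
+One quick, non-exhaustive way to see whether workloads have landed in the default namespace:
+
+```bash
+# The built-in "kubernetes" Service will always be listed; anything else warrants review
+kubectl get all -n default
+```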
+
diff --git a/docs/security/cis_self_assessment17.md b/docs/security/cis_self_assessment17.md
new file mode 100644
index 00000000..c78a89a2
--- /dev/null
+++ b/docs/security/cis_self_assessment17.md
@@ -0,0 +1,2933 @@
+---
+title: CIS 1.7 Self-Assessment Guide
+---
+
+## Overview
+
+This document is a companion to the RKE2 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of RKE2, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes benchmark. It is to be used by RKE2 operators, security teams, auditors, and decision makers.
+
+This guide is specific to the **v1.25** release line of RKE2 and the **v1.7** release of the CIS Kubernetes Benchmark.
+
+For more information about each control, including detailed rationales and descriptions of the audit checks, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+### Testing controls methodology
+
+Each control in the CIS Kubernetes Benchmark was evaluated against an RKE2 cluster that was configured according to the accompanying hardening guide.
+
+These are the possible results for each control:
+
+- **PASS** - The RKE2 cluster under test passed the audit outlined in the benchmark.
+- **Not Applicable** - The control is not applicable to RKE2 because of how it is designed to operate. The rationale section will explain why this is so.
+- **WARN** - The control is manual in the CIS benchmark and depends on manual operator intervention. The remediation section will provide guidance on how to achieve a PASS result.
+
+## 1 Control Plane Security Configuration
+
+### 1.1 Control Plane Node Configuration Files
+
+#### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+
+
+#### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+
+
+#### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml`
+
+
+#### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml`
+
+
+#### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml`
+
+
+#### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is present
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml`
+
+
+#### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+If running the master node only, with no etcd role, this check is Not Applicable.
+If the controlplane and etcd roles are present on the same node and this check returns a WARN,
+run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml`
+
+
+#### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+If running the master node only, with no etcd role, this check is Not Applicable.
+If the controlplane and etcd roles are present on the same node and this check returns a WARN,
+run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml`
+
+
+#### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Note that for many CNIs, a lock file is created with permissions 750. This is expected and can be ignored.
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/cni/networks/` and `chmod 600 /etc/cni/net.d/`
+
+#### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result:** 'root:root' is present
+
+
+Returned Value:
+
+```console
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root `
+
+
+#### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/db/etcd
+```
+
+**Expected Result:** permissions has permissions 700, expected 700 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=700
+```
+
+
+
+Remediation:
+
+If running the master node only, with no etcd role, this check is Not Applicable.
+If the controlplane and etcd roles are present on the same node and this check returns a WARN,
+then on the etcd server node, get the etcd data directory, passed as the --data-dir argument,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+`chmod 700 /var/lib/rancher/rke2/server/db/etcd`
+
+
+#### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/db/etcd
+```
+
+**Expected Result:** 'etcd:etcd' is present
+
+
+Returned Value:
+
+```console
+etcd:etcd
+```
+
+
+
+Remediation:
+
+If running the master node only, with no etcd role, this check is Not Applicable.
+If the controlplane and etcd roles are present on the same node and this check returns a WARN,
+then on the etcd server node, get the etcd data directory, passed as the --data-dir argument,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, `chown etcd:etcd /var/lib/rancher/rke2/server/db/etcd`
+
+
+#### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/server/cred/admin.kubeconfig`
+
+
+#### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/server/cred/admin.kubeconfig`
+
+
+#### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig`
+
+
+#### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig`
+
+
+#### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/server/cred/controller.kubeconfig; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/server/cred/controller.kubeconfig`
+
+
+#### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root /var/lib/rancher/rke2/server/cred/controller.kubeconfig`
+
+
+#### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/tls
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown -R root:root /var/lib/rancher/rke2/server/tls`
+
+
+#### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod -R 600 /var/lib/rancher/rke2/server/tls/*.crt`
+
+#### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.key
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod -R 600 /var/lib/rancher/rke2/server/tls/*.key`
+
+
+### 1.2 API Server
+
+#### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+By default, RKE2 sets the --anonymous-auth argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove anything similar to below.
+```
+kube-apiserver-arg:
+ - "anonymous-auth=true"
+```
+
+#### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--token-auth-file' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+Follow the documentation and configure alternate mechanisms for authentication.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove anything similar to below.
+```
+kube-apiserver-arg:
+ - "token-auth-file="
+```
+
+
+#### 1.2.3 Ensure that the --DenyServiceExternalIPs is set (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+By default, RKE2 does not set DenyServiceExternalIPs.
+To enable this admission plugin, edit the RKE2 config file /etc/rancher/rke2/config.yaml as shown below.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=DenyServiceExternalIPs"
+```
+
+#### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the kubelet client certificate and key.
+They are generated and located at /var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt and /var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
+If for some reason you need to provide your own certificate and key, you can set the
+below parameters in the RKE2 config file /etc/rancher/rke2/config.yaml.
+```
+kube-apiserver-arg:
+ - "kubelet-client-certificate="
+ - "kubelet-client-key="
+```
+
+
+#### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--kubelet-certificate-authority' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the kubelet CA cert file, at /var/lib/rancher/rke2/server/tls/server-ca.crt.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "kubelet-certificate-authority="
+```
+
+
+#### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--authorization-mode' does not have 'AlwaysAllow'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --authorization-mode to AlwaysAllow.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "authorization-mode=AlwaysAllow"
+```
+
+
+#### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--authorization-mode' has 'Node'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --authorization-mode to Node and RBAC.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and
+ensure that you are not overriding authorization-mode.
+
+
+#### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--authorization-mode' has 'RBAC'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --authorization-mode to Node and RBAC.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and
+ensure that you are not overriding authorization-mode.
+
+
+#### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the below parameters.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=...,EventRateLimit,..."
+ - "admission-control-config-file="
+```
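+
+RKE2 does not create the admission configuration file itself; a minimal sketch of what it might contain, with hypothetical file paths and example limits, is below. The admission-control-config-file parameter above would then point at the AdmissionConfiguration file.
+
+```bash
+# Hypothetical files and values; adjust paths, qps, and burst to your environment
+cat <<'EOF' > /etc/rancher/rke2/admission-control-config.yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+  - name: EventRateLimit
+    path: /etc/rancher/rke2/event-rate-limit.yaml
+EOF
+
+cat <<'EOF' > /etc/rancher/rke2/event-rate-limit.yaml
+apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+kind: Configuration
+limits:
+  - type: Server
+    qps: 5000
+    burst: 20000
+EOF
+```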
+
+#### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --enable-admission-plugins to AlwaysAdmit.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=AlwaysAdmit"
+```
+
+
+#### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Permissive, per CIS guidelines:
+"This setting could impact offline or isolated clusters, which have images pre-loaded and
+do not have access to a registry to pull in-use images. This setting is not appropriate for
+clusters which use this configuration."
+If appropriate for your cluster, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+AlwaysPullImages, for example `--enable-admission-plugins=...,AlwaysPullImages,...`.
+A config-file sketch follows.
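+
+In RKE2 config-file form this could be expressed roughly as below; the plugin list shown is an assumption, so keep whichever plugins your cluster already enables (NodeRestriction is RKE2's default):
+
+```bash
+# Sketch of the stanza to merge into /etc/rancher/rke2/config.yaml; if a kube-apiserver-arg
+# block already exists, merge the entry rather than adding a second block
+cat <<'EOF'
+kube-apiserver-arg:
+  - "enable-admission-plugins=NodeRestriction,AlwaysPullImages"
+EOF
+```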
+
+#### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+Enabling Pod Security Policy is no longer supported on RKE2 v1.25+ and will cause applications to unexpectedly fail.
+
+#### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --disable-admission-plugins to anything.
+Follow the documentation and create ServiceAccount objects as per your environment.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "disable-admission-plugins=ServiceAccount"
+```
+
+
+#### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --disable-admission-plugins argument, so the NamespaceLifecycle admission plugin remains enabled.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like the one below.
+```
+kube-apiserver-arg:
+ - "disable-admission-plugins=...,NamespaceLifecycle,..."
+```
+
+
+#### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--enable-admission-plugins' has 'NodeRestriction'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --enable-admission-plugins argument to NodeRestriction.
+Check the RKE2 config file /etc/rancher/rke2/config.yaml and ensure that you are not overriding the enabled admission plugins.
+If you are, make sure NodeRestriction is included in the list. For example,
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=...,NodeRestriction,..."
+```
+
+
+#### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--secure-port' is greater than 0 OR '--secure-port' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
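+
+If you manage this flag through the RKE2 config file /etc/rancher/rke2/config.yaml instead of the pod manifest, a sketch like the following keeps the secure port at a non-zero value (6443 is the default shown in the audit output above; any other non-zero port also satisfies the control):
+```
+kube-apiserver-arg:
+ - "secure-port=6443"
+```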
+
+
+#### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--profiling' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --profiling argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like the one below.
+```
+kube-apiserver-arg:
+ - "profiling=true"
+```
+
+
+#### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-path' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-path argument to /var/lib/rancher/rke2/server/logs/audit.log.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+```
+kube-apiserver-arg:
+ - "audit-log-path=/var/log/rke2/audit.log"
+```
+
+
+#### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-maxage' is greater or equal to 30
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-maxage argument to 30 days.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-maxage parameter to an appropriate number of days, for example,
+```
+kube-apiserver-arg:
+ - "audit-log-maxage=40"
+```
+
+
+#### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-maxbackup' is greater or equal to 10
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-maxbackup argument to 10.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to an appropriate value.
+For example,
+```
+kube-apiserver-arg:
+ - "audit-log-maxbackup=15"
+```
+
+
+#### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-maxsize' is greater or equal to 100
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-maxsize argument to 100 MB.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example,
+```
+kube-apiserver-arg:
+ - "audit-log-maxsize=150"
+```
+
+
+#### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--request-timeout' is not present OR '--request-timeout' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+This control is permissive; per CIS guidelines,
+"it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed".
+If needed, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+and set the parameter below. For example,
+```
+kube-apiserver-arg:
+ - "request-timeout=300s"
+```
+
+
+#### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--service-account-lookup' is not present OR '--service-account-lookup' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --service-account-lookup argument, so the upstream Kubernetes default of true takes effect.
+If you want to set it explicitly, edit the RKE2 config file /etc/rancher/rke2/config.yaml and set service-account-lookup to true. For example,
+```
+kube-apiserver-arg:
+ - "service-account-lookup=true"
+```
+Alternatively, you can delete the service-account-lookup parameter from this file so
+that the default takes effect.
+
+
+#### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--service-account-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+RKE2 automatically generates and sets the service account key file.
+It is located at /var/lib/rancher/rke2/server/tls/service.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like the one below.
+```
+kube-apiserver-arg:
+ - "service-account-key-file="
+```
+
+
+#### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--etcd-certfile' is present AND '--etcd-keyfile' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+RKE2 automatically generates and sets the etcd certificate and key files.
+They are located at /var/lib/rancher/rke2/server/tls/etcd/client.crt and /var/lib/rancher/rke2/server/tls/etcd/client.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like the ones below.
+```
+kube-apiserver-arg:
+ - "etcd-certfile="
+ - "etcd-keyfile="
+```
+
+
+#### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--tls-cert-file' is present AND '--tls-private-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically generates and provides the TLS certificate and private key for the apiserver.
+They are generated and located at /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt and /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like the ones below.
+```
+kube-apiserver-arg:
+ - "tls-cert-file="
+ - "tls-private-key-file="
+```
+
+
+#### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--client-ca-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the client certificate authority file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/client-ca.crt.
+If for some reason you need to provide your own CA certificate, look at using the rke2 certificate command line tool.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like the one below.
+```
+kube-apiserver-arg:
+ - "client-ca-file="
+```
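+
+As an optional sanity check of the generated CA mentioned above (assuming openssl is installed and the command is run as root on the server node):
+```bash
+# Print the subject and expiry date of the generated client CA certificate
+openssl x509 -in /var/lib/rancher/rke2/server/tls/client-ca.crt -noout -subject -enddate
+```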
+
+
+#### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--etcd-cafile' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the etcd certificate authority file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt, as shown by the --etcd-cafile flag in the audit output above.
+If for some reason you need to provide your own CA certificate, look at using the rke2 certificate command line tool.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like the one below.
+```
+kube-apiserver-arg:
+ - "etcd-cafile="
+```
+
+
+#### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--encryption-provider-config' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+RKE2 is always configured to encrypt secrets.
+Secrets encryption is managed with the rke2 secrets-encrypt command line tool.
+If needed, you can find the generated encryption config at /var/lib/rancher/rke2/server/cred/encryption-config.json.
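+
+For example, the current state of secrets encryption can be inspected on a server node (run as root; output varies by cluster):
+```bash
+# Show the current secrets encryption status and rotation stage
+rke2 secrets-encrypt status
+```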
+
+
+#### 1.2.30 Ensure that encryption providers are appropriately configured (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
+if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -o 'providers\"\:\[.*\]' $ENCRYPTION_PROVIDER_CONFIG | grep -o "[A-Za-z]*" | head -2 | tail -1 | sed 's/^/provider=/'; fi
+```
+
+**Expected Result:** 'provider' contains valid elements from 'aescbc,kms,secretbox'
+
+
+Returned Value:
+
+```console
+provider=aescbc
+```
+
+
+
+Remediation:
+
+RKE2 is always configured to use the aescbc encryption provider to encrypt secrets.
+Secrets encryption is managed with the rke2 secrets-encrypt command line tool.
+If needed, you can find the generated encryption config at /var/lib/rancher/rke2/server/cred/encryption-config.json.
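+
+If you want to list every configured provider rather than only the first one printed by the audit command, a simple sketch such as the following can be run as root; the grep pattern is an assumption about the provider names, not an RKE2-specific interface:
+```bash
+# List configured encryption providers without printing any key material
+grep -o '"aescbc"\|"aesgcm"\|"secretbox"\|"kms"\|"identity"' /var/lib/rancher/rke2/server/cred/encryption-config.json | sort -u
+```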
+
+
+#### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, the RKE2 kube-apiserver complies with this test. Changes to these values may cause regression, therefore ensure that all apiserver clients support the new TLS configuration before applying it in production deployments.
+If a custom TLS configuration is required, consider also creating a custom version of this rule that aligns with your requirements.
+If this check fails, remove any custom configuration around `tls-cipher-suites` or update the /etc/rancher/rke2/config.yaml file to match the default by adding the following:
+```
+kube-apiserver-arg:
+ - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
+```
+
+
+### 1.3 Controller Manager
+
+#### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--terminated-pod-gc-threshold' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2655 2552 1 18:52 ? 00:00:04 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets a terminated-pod-gc-threshold of 1000.
+If you need to change this value, edit the RKE2 config file /etc/rancher/rke2/config.yaml on the control plane node
+and set the --terminated-pod-gc-threshold argument to an appropriate value, for example:
+```
+kube-controller-manager-arg:
+ - "terminated-pod-gc-threshold=10"
+```
+
+
+#### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--profiling' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2655 2552 1 18:52 ? 00:00:04 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --profiling argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "profiling=true"
+```
+
+
+#### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--use-service-account-credentials' is not equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2655 2552 1 18:52 ? 00:00:04 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --use-service-account-credentials argument to true.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "use-service-account-credentials=false"
+```
+
+
+#### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--service-account-private-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2655 2552 1 18:52 ? 00:00:04 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the service account private key file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/service.current.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "service-account-private-key-file="
+```
+
+
+#### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--root-ca-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2655 2552 1 18:52 ? 00:00:04 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the root CA file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/server-ca.crt.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "root-ca-file="
+```
+
+
+#### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--feature-gates' does not have 'RotateKubeletServerCertificate=false' OR '--feature-gates' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2655 2552 1 18:52 ? 00:00:04 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
+```
+--feature-gates=RotateKubeletServerCertificate=true
+```
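+
+With RKE2, the same outcome can usually be reached through the config file rather than by editing the static pod manifest directly. The snippet below is only a sketch that reuses the kube-controller-manager-arg mechanism shown elsewhere in this section; confirm how an explicit --feature-gates value interacts with RKE2's defaults before applying it.
+```
+kube-controller-manager-arg:
+  - "feature-gates=RotateKubeletServerCertificate=true"
+```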
+
+
+#### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2655 2552 1 18:52 ? 00:00:04 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --bind-address argument to 127.0.0.1.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "bind-address="
+```
+
+
+### 1.4 Scheduler
+
+#### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-scheduler
+```
+
+**Expected Result:** '--profiling' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2666 2555 0 18:52 ? 00:00:01 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --profiling argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-scheduler-arg:
+ - "profiling=true"
+```
+
+
+#### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-scheduler
+```
+
+**Expected Result:** '--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2666 2555 0 18:52 ? 00:00:01 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --bind-address argument to 127.0.0.1.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-scheduler-arg:
+ - "bind-address="
+```
+
+
+## 2 Etcd Node Configuration
+
+#### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+
+```
+
+**Expected Result:** '.client-transport-security.cert-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/server-client.crt' AND '.client-transport-security.key-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/server-client.key'
+
+
+Returned Value:
+
+```console
+advertise-client-urls: https://10.10.10.100:2379
+client-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
+data-dir: /var/lib/rancher/rke2/server/db/etcd
+election-timeout: 5000
+experimental-initial-corrupt-check: true
+heartbeat-interval: 500
+initial-advertise-peer-urls: https://10.10.10.100:2380
+initial-cluster: server-0-43eb3aad=https://10.10.10.100:2380
+initial-cluster-state: new
+listen-client-http-urls: https://127.0.0.1:2382
+listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
+listen-metrics-urls: http://127.0.0.1:2381
+listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
+log-outputs:
+- stderr
+logger: zap
+name: server-0-43eb3aad
+peer-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
+snapshot-count: 10000
+```
+
+
+
+Remediation:
+
+By default, RKE2 generates cert and key files for etcd.
+These are located in /var/lib/rancher/rke2/server/tls/etcd/.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to use custom cert and key files.
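+
+To verify the values manually, a command such as the following can be used (a sketch, assuming the default configuration file location referenced above):
+```bash
+grep -A 5 'client-transport-security' /var/lib/rancher/rke2/server/db/etcd/config
+```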
+
+
+#### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_CLIENT_CERT_AUTH' is present OR '.client-transport-security.client-cert-auth' is equal to 'true'
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=89670f57382642704d89307891be3ba62144d3f1018d38ea0db9e4bb32c9a61c
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --client-cert-auth parameter to true.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to disable client certificate authentication.
+
+
+#### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present OR '.client-transport-security.auto-tls' is present
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=89670f57382642704d89307891be3ba62144d3f1018d38ea0db9e4bb32c9a61c
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --auto-tls parameter.
+If this check fails, edit the etcd configuration file /var/lib/rancher/rke2/server/db/etcd/config on the control plane
+node and either remove the --auto-tls parameter or set it to false:
+```
+client-transport-security:
+  auto-tls: false
+```
+
+
+#### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+
+```
+
+**Expected Result:** '.peer-transport-security.cert-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-server-client.crt' AND '.peer-transport-security.key-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-server-client.key'
+
+
+Returned Value:
+
+```console
+advertise-client-urls: https://10.10.10.100:2379
+client-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
+data-dir: /var/lib/rancher/rke2/server/db/etcd
+election-timeout: 5000
+experimental-initial-corrupt-check: true
+heartbeat-interval: 500
+initial-advertise-peer-urls: https://10.10.10.100:2380
+initial-cluster: server-0-43eb3aad=https://10.10.10.100:2380
+initial-cluster-state: new
+listen-client-http-urls: https://127.0.0.1:2382
+listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
+listen-metrics-urls: http://127.0.0.1:2381
+listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
+log-outputs:
+- stderr
+logger: zap
+name: server-0-43eb3aad
+peer-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
+snapshot-count: 10000
+```
+
+
+
+Remediation:
+
+By default, RKE2 generates peer cert and key files for etcd.
+These are located in /var/lib/rancher/rke2/server/tls/etcd/.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to use custom peer cert and key files.
+
+
+#### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_PEER_CLIENT_CERT_AUTH' is present OR '.peer-transport-security.client-cert-auth' is equal to 'true'
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=89670f57382642704d89307891be3ba62144d3f1018d38ea0db9e4bb32c9a61c
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --peer-client-cert-auth parameter to true.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to disable peer client certificate authentication.
+
+
+#### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present OR '.peer-transport-security.auto-tls' is present
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=89670f57382642704d89307891be3ba62144d3f1018d38ea0db9e4bb32c9a61c
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --peer-auto-tls parameter.
+If this check fails, edit the etcd configuration file /var/lib/rancher/rke2/server/db/etcd/config on the control plane
+node and either remove the --peer-auto-tls parameter or set it to false:
+```
+peer-transport-security:
+  auto-tls: false
+```
+
+
+#### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_TRUSTED_CA_FILE' is present OR '.peer-transport-security.trusted-ca-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=89670f57382642704d89307891be3ba62144d3f1018d38ea0db9e4bb32c9a61c
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 generates a unique certificate authority for etcd.
+This is located at /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to use a shared certificate authority.
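+
+One way to confirm manually that etcd does not share a certificate authority with the rest of the cluster is to compare certificate subjects; this sketch assumes the default RKE2 certificate paths referenced in this guide:
+```bash
+openssl x509 -noout -subject -in /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
+openssl x509 -noout -subject -in /var/lib/rancher/rke2/server/tls/server-ca.crt
+```
+The two subjects should differ when a unique certificate authority is in use.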
+
+
+## 3 Control Plane Configuration
+
+### 3.1 Authentication and Authorization
+
+#### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
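+
+If OIDC is adopted, it is typically enabled by passing the relevant flags to the kube-apiserver. The snippet below is only a sketch; the issuer URL, client ID, and claim are placeholders that must match your identity provider.
+```
+kube-apiserver-arg:
+  - "oidc-issuer-url=https://idp.example.com"
+  - "oidc-client-id=kubernetes"
+  - "oidc-username-claim=email"
+```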
+
+#### 3.1.2 Service account token authentication should not be used for users (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of service account tokens.
+
+#### 3.1.3 Bootstrap token authentication should not be used for users (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of bootstrap tokens.
+
+### 3.2 Logging
+
+#### 3.2.1 Ensure that a minimal audit policy is created (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result:** '--audit-policy-file' is present
+
+
+Returned Value:
+
+```console
+root 2511 2457 9 18:52 ? 00:00:23 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2655 2552 1 18:52 ? 00:00:04 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+Create an audit policy file for your cluster.
+
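+A minimal policy that satisfies this check could look like the following sketch. Note that the cluster under test already points the API server at /etc/rancher/rke2/audit-policy.yaml, as shown in the returned value above.
+```
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  - level: Metadata
+```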
+
+#### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas:
+- Access to Secrets managed by the cluster. Care should be taken to only
+  log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+  order to avoid the risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging). A sample policy fragment covering these areas is sketched below.
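+
+The fragment below is a sketch of rules addressing these areas; adapt it to your own policy rather than treating it as complete.
+```
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  # Log only Metadata for sensitive resources to avoid recording their contents.
+  - level: Metadata
+    resources:
+      - group: ""
+        resources: ["secrets", "configmaps"]
+      - group: "authentication.k8s.io"
+        resources: ["tokenreviews"]
+  # Capture modification of workloads and use of exec/port-forward/proxy subresources.
+  - level: Metadata
+    verbs: ["create", "update", "patch", "delete"]
+    resources:
+      - group: ""
+        resources: ["pods", "pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
+      - group: "apps"
+        resources: ["deployments"]
+  # Catch-all: Metadata-level logging for everything else.
+  - level: Metadata
+```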
+
+## 4 Worker Node Security Configuration
+
+### 4.1 Worker Node Configuration Files
+
+#### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet service file, all configuration is passed in as arguments at runtime.
+
+#### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet service file, all configuration is passed in as arguments at runtime.
+
+#### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the command below (based on the file location on your system) on each worker node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig`
+
+
+#### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result:** 'root:root' is present
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the command below (based on the file location on your system) on each worker node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig`
+
+
+#### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the command below (based on the file location on your system) on each worker node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/kubelet.kubeconfig`
+
+
+#### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the command below (based on the file location on your system) on each worker node.
+For example,
+`chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig`
+
+
+#### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/client-ca.crt; then stat -c permissions=%a /var/lib/rancher/rke2/agent/client-ca.crt; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the command below (based on the file location on your system) on each worker node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/client-ca.crt`
+
+
+#### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/client-ca.crt; then stat -c %U:%G /var/lib/rancher/rke2/agent/client-ca.crt; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the following command to modify the ownership of the --client-ca-file.
+`chown root:root /var/lib/rancher/rke2/agent/client-ca.crt`
+
+
+#### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet config file, all configuration is passed in as arguments at runtime.
+
+#### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet config file, all configuration is passed in as arguments at runtime.
+
+### 4.2 Kubelet
+
+#### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--anonymous-auth' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --anonymous-auth argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml, remove any lines similar to below.
+```
+kubelet-arg:
+ - "anonymous-auth=true"
+```
+Based on your system, restart the RKE2 service. For example,
+`systemctl restart rke2-server.service`
+
+
+#### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--authorization-mode' does not have 'AlwaysAllow'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --authorization-mode to AlwaysAllow.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml, remove any lines similar to below.
+```
+kubelet-arg:
+ - "authorization-mode=AlwaysAllow"
+```
+Based on your system, restart the RKE2 service. For example,
+`systemctl restart rke2-server.service`
+
+
+#### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--client-ca-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the client ca certificate for the Kubelet.
+It is generated and located at /var/lib/rancher/rke2/agent/client-ca.crt.
+
+
+#### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--read-only-port' is equal to '0' OR '--read-only-port' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --read-only-port to 0. If you have set this to a different value, you
+should set it back to 0. Edit the RKE2 config file /etc/rancher/rke2/config.yaml, remove any lines similar to below.
+```
+kubelet-arg:
+ - "read-only-port=XXXX"
+```
+Based on your system, restart the RKE2 service. For example,
+`systemctl restart rke2-server.service`
+
+
+#### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--streaming-connection-idle-timeout' is present OR '--streaming-connection-idle-timeout' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml, set the following parameter to an appropriate value.
+```
+kubelet-arg:
+ - "streaming-connection-idle-timeout=5m"
+```
+Based on your system, restart the RKE2 service. For example,
+`systemctl restart rke2-server.service`
+
+
+#### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--make-iptables-util-chains' is present OR '--make-iptables-util-chains' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml, set the following parameter.
+```
+kubelet-arg:
+ - "make-iptables-util-chains=true"
+```
+Or, remove the --make-iptables-util-chains argument to let RKE2 use the default value.
+Based on your system, restart the RKE2 service. For example,
+`systemctl restart rke2-server.service`
+
+
+#### 4.2.7 Ensure that the --hostname-override argument is not set (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+By default, RKE2 does set the --hostname-override argument. Per CIS guidelines, this is acceptable because some
+cloud providers require this flag to ensure that the hostname matches node names.
+
+#### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--event-qps' is present OR '--event-qps' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml, set the following parameter to an appropriate value.
+```
+kubelet-arg:
+ - "event-qps="
+```
+Based on your system, restart the RKE2 service. For example,
+`systemctl restart rke2-server.service`
+
+
+#### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--tls-cert-file' is present AND '--tls-private-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the TLS certificate and private key for the Kubelet.
+They are generated and located at /var/lib/rancher/rke2/agent/serving-kubelet.crt and /var/lib/rancher/rke2/agent/serving-kubelet.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to below.
+```
+kubelet-arg:
+ - "tls-cert-file="
+ - "tls-private-key-file="
+```
+
+
+#### 4.2.10 Ensure that the --rotate-certificates argument is not set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--rotate-certificates' is present OR '--rotate-certificates' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --rotate-certificates argument.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml, remove any rotate-certificates parameter.
+Based on your system, restart the RKE2 service. For example,
+`systemctl restart rke2-server.service`
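+
+For illustration, the kind of entry that would cause this check to fail and should be removed looks like the following (hypothetical value):
+```
+kubelet-arg:
+  - "rotate-certificates=false"
+```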
+
+
+#### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** 'RotateKubeletServerCertificate' is present OR 'RotateKubeletServerCertificate' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2292 2256 2 18:52 ? 00:00:07 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the RotateKubeletServerCertificate feature gate.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any RotateKubeletServerCertificate parameter.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the tls-cipher-suites parameter to the following values,
+```
+kubelet-arg:
+ - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
+```
+or to a subset of these values.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+#### 4.2.13 Ensure that a limit is set on pod PIDs (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml, set the following parameter to an appropriate value.
+```
+kubelet-arg:
+ - "pod-max-pids="
+```
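+For illustration only, the same parameter with a hypothetical limit of 4096 filled in is shown below; choose a value appropriate for your workloads.
+```
+kubelet-arg:
+ - "pod-max-pids=4096" # hypothetical example value
+```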
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+## 5 Kubernetes Policies
+
+### 5.1 RBAC and Service Accounts
+
+#### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower privileged role and then remove the
+clusterrolebinding to the cluster-admin role:
+kubectl delete clusterrolebinding [name]
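+
+One possible way to enumerate those bindings, sketched with standard kubectl output formatting, is:
+
+```bash
+# List every ClusterRoleBinding with the role it references and its subjects,
+# then keep the header plus any rows bound to cluster-admin.
+kubectl get clusterrolebindings -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name' | grep -E 'NAME|cluster-admin'
+```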
+
+#### 5.1.2 Minimize access to secrets (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
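+
+To review current access, `kubectl auth can-i` can be queried on behalf of a subject; the service account below is only an example:
+
+```bash
+# Check whether an example service account can read Secrets cluster-wide.
+kubectl auth can-i get secrets --all-namespaces --as=system:serviceaccount:default:default
+kubectl auth can-i list secrets --all-namespaces --as=system:serviceaccount:default:default
+```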
+
+#### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
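+
+A possible way to spot wildcard usage, assuming `jq` is available on the machine running kubectl:
+
+```bash
+# Print the names of ClusterRoles whose rules use "*" for resources or verbs.
+kubectl get clusterroles -o json \
+  | jq -r '.items[] | select(.rules[]? | (.resources[]? == "*") or (.verbs[]? == "*")) | .metadata.name' \
+  | sort -u
+```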
+
+#### 5.1.4 Minimize access to create pods (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+#### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value
+automountServiceAccountToken: false
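+
+One way to apply this to the default service account of a namespace (the namespace name here is only an example) is with kubectl patch:
+
+```bash
+# Disable automatic token mounting on the default service account of the
+# "default" namespace; repeat for each namespace as needed.
+kubectl patch serviceaccount default -n default -p '{"automountServiceAccountToken": false}'
+```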
+
+#### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Modify the definition of pods and service accounts which do not need to mount service
+account tokens to disable it.
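+
+For reference, a minimal pod spec with token mounting disabled might look like the sketch below; all names are placeholders:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-no-token          # placeholder name
+spec:
+  automountServiceAccountToken: false
+  containers:
+    - name: app                   # placeholder name
+      image: registry.example.com/app:latest   # placeholder image
+```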
+
+#### 5.1.7 Avoid use of system:masters group (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Remove the system:masters group from all users in the cluster.
+
+#### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+#### 5.1.9 Minimize access to create persistent volumes (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove create access to PersistentVolume objects in the cluster.
+
+#### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove access to the proxy sub-resource of node objects.
+
+#### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
+
+#### 5.1.12 Minimize access to webhook configuration objects (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
+
+#### 5.1.13 Minimize access to the service account token creation (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove access to the token sub-resource of serviceaccount objects.
+
+### 5.2 Pod Security Standards
+
+#### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
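+
+If Pod Security Admission is the chosen mechanism, namespaces can be labelled with the desired level; the namespace name below is a placeholder:
+
+```bash
+# Enforce the "restricted" Pod Security Standard on an example namespace.
+kubectl label namespace my-app-namespace pod-security.kubernetes.io/enforce=restricted
+```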
+
+#### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+#### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+#### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+#### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
+
+#### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+#### 5.2.7 Minimize the admission of root containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs`, with a range of UIDs that does not include 0, is set.
+
+#### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+#### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
+
+**Result:** WARN
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+#### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+#### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+#### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+#### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+### 5.3 Network Policies and CNI
+
+#### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+#### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
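+
+As a starting point, a default-deny ingress policy for a namespace (the namespace name is a placeholder) could look like:
+
+```
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: my-app-namespace     # placeholder namespace
+spec:
+  podSelector: {}                 # selects every pod in the namespace
+  policyTypes:
+    - Ingress                     # no ingress rules defined, so all ingress is denied
+```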
+
+### 5.4 Secrets Management
+
+#### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
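+
+For reference, a pod can consume a Secret as a mounted file instead of an environment variable, as in the sketch below; all names are placeholders:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-secret-as-file    # placeholder name
+spec:
+  containers:
+    - name: app                   # placeholder name
+      image: registry.example.com/app:latest   # placeholder image
+      volumeMounts:
+        - name: app-credentials
+          mountPath: /etc/app-credentials
+          readOnly: true
+  volumes:
+    - name: app-credentials
+      secret:
+        secretName: app-credentials   # placeholder Secret name
+```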
+
+#### 5.4.2 Consider external secret storage (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+### 5.5 Extensible Admission Control
+
+#### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the Kubernetes documentation and set up image provenance.
+
+### 5.7 General Policies
+
+#### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+#### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is as below:
+ securityContext:
+ seccompProfile:
+ type: RuntimeDefault
+
+#### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
+
+#### 5.7.4 The default namespace should not be used (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
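+
+A quick way to check for workloads in the default namespace:
+
+```bash
+# Ideally this returns nothing beyond the built-in "kubernetes" Service.
+kubectl get all -n default
+```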
+
diff --git a/docs/security/cis_self_assessment18.md b/docs/security/cis_self_assessment18.md
new file mode 100644
index 00000000..ac5e5004
--- /dev/null
+++ b/docs/security/cis_self_assessment18.md
@@ -0,0 +1,2932 @@
+---
+title: CIS 1.8 Self-Assessment Guide
+---
+
+## Overview
+
+This document is a companion to the RKE2 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of RKE2, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes benchmark. It is to be used by RKE2 operators, security teams, auditors, and decision makers.
+
+This guide is specific to the **v1.26-1.31** release line of RKE2 and the **v1.8** release of the CIS Kubernetes Benchmark.
+
+For more information about each control, including detailed rationales and descriptions of the checks, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.8. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+### Testing controls methodology
+
+Each control in the CIS Kubernetes Benchmark was evaluated against an RKE2 cluster that was configured according to the accompanying hardening guide.
+
+These are the possible results for each control:
+
+- **PASS** - The control is automated (scored: true). The RKE2 cluster under test passed the audit outlined in the benchmark.
+- **Not Applicable** - The control is not applicable (type: skip) to RKE2 because of how it is designed to operate. The rationale section will explain why this is so.
+- **WARN** - The control is manual (scored: false) in the CIS benchmark and depends on manual operator intervention. The remediation section will provide guidance on how to achieve a PASS result.
+
+## 1 Control Plane Security Configuration
+
+### 1.1 Control Plane Node Configuration Files
+
+#### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+
+
+#### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+
+
+#### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml`
+
+
+#### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml`
+
+
+#### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml`
+
+
+#### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is present
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml`
+
+
+#### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+If the node runs the control plane role only, with no etcd role, this check is Not Applicable.
+If the control plane and etcd roles are present on the same node and this check fails,
+run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml`
+
+
+#### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+If the node runs the control plane role only, with no etcd role, this check is Not Applicable.
+If the control plane and etcd roles are present on the same node and this check fails,
+run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml`
+
+
+#### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Note that for many CNIs, a lock file is created with permissions 750. This is expected and can be ignored.
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/cni/networks/` and `chmod 600 /etc/cni/net.d/`
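+
+A sketch that applies the permission change to the individual files under the default RKE2 CNI locations shown above:
+
+```bash
+# Tighten permissions on CNI configuration and network state files.
+find /etc/cni/net.d /var/lib/cni/networks -type f -exec chmod 600 {} +
+```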
+
+#### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result:** 'root:root' is present
+
+
+Returned Value:
+
+```console
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root <path/to/cni/files>`
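+
+A sketch, assuming the CNI paths checked by the audit above:
+
+```bash
+# Set root:root ownership on CNI configuration and network state files.
+find /etc/cni/net.d /var/lib/cni/networks -mindepth 1 -exec chown root:root {} +
+```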
+
+
+#### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/db/etcd
+```
+
+**Expected Result:** permissions has permissions 700, expected 700 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=700
+```
+
+
+
+Remediation:
+
+If the node runs the control plane role only, with no etcd role, this check is Not Applicable.
+If the control plane and etcd roles are present on the same node and this check fails,
+get the etcd data directory, passed to etcd as the --data-dir argument, from the
+command 'ps -ef | grep etcd' on that node.
+Run the below command (based on the etcd data directory found above). For example,
+`chmod 700 /var/lib/rancher/rke2/server/db/etcd`
+
+
+#### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/db/etcd
+```
+
+**Expected Result:** 'etcd:etcd' is present
+
+
+Returned Value:
+
+```console
+etcd:etcd
+```
+
+
+
+Remediation:
+
+If the node runs the control plane role only, with no etcd role, this check is Not Applicable.
+If the control plane and etcd roles are present on the same node and this check fails,
+get the etcd data directory, passed to etcd as the --data-dir argument, from the
+command 'ps -ef | grep etcd' on that node.
+Run the below command (based on the etcd data directory found above).
+For example, `chown etcd:etcd /var/lib/rancher/rke2/server/db/etcd`
+
+
+#### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chmod 600 /var/lib/rancher/rke2/server/cred/admin.kubeconfig`
+
+
+#### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example, `chown root:root /var/lib/rancher/rke2/server/cred/admin.kubeconfig`
+
+
+#### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig`
+
+
+#### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig`
+
+
+#### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/server/cred/controller.kubeconfig; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/server/cred/controller.kubeconfig`
+
+
+#### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown root:root /var/lib/rancher/rke2/server/cred/controller.kubeconfig`
+
+
+#### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/tls
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chown -R root:root /var/lib/rancher/rke2/server/tls`
+
+
+#### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod -R 600 /var/lib/rancher/rke2/server/tls/*.crt`
+
+#### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.key
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+`chmod -R 600 /var/lib/rancher/rke2/server/tls/*.key`
+
+
+### 1.2 API Server
+
+#### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--anonymous-auth' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --anonymous-auth argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove anything similar to below.
+```
+kube-apiserver-arg:
+ - "anonymous-auth=true"
+```
+
+
+#### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--token-auth-file' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+Follow the documentation and configure alternate mechanisms for authentication.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove anything similar to below.
+```
+kube-apiserver-arg:
+ - "token-auth-file="
+```
+
+
+#### 1.2.3 Ensure that the --DenyServiceExternalIPs is set (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+By default, RKE2 does not set DenyServiceExternalIPs.
+To enable this flag, edit the RKE2 config file /etc/rancher/rke2/config.yaml as shown below.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=DenyServiceExternalIPs"
+```
+
+#### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the kubelet client certificate and key.
+They are generated and located at /var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt and /var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
+If for some reason you need to provide your own certificate and key, you can set the
+below parameters in the RKE2 config file /etc/rancher/rke2/config.yaml.
+```
+kube-apiserver-arg:
+ - "kubelet-client-certificate="
+ - "kubelet-client-key="
+```
+
+
+#### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--kubelet-certificate-authority' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the kubelet CA cert file, at /var/lib/rancher/rke2/server/tls/server-ca.crt.
+If for some reason you need to provide your own CA certificate, look at using the rke2 certificate command line tool.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "kubelet-certificate-authority="
+```
+
+
+#### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--authorization-mode' does not have 'AlwaysAllow'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --authorization-mode to AlwaysAllow.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "authorization-mode=AlwaysAllow"
+```
+
+
+#### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--authorization-mode' has 'Node'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --authorization-mode to Node and RBAC.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+and ensure that you are not overriding authorization-mode.
+
+
+#### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--authorization-mode' has 'RBAC'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --authorization-mode to Node and RBAC.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+and ensure that you are not overriding authorization-mode.
+
+
+#### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the below parameters.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=...,EventRateLimit,..."
+ - "admission-control-config-file="
+```
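+
+The file referenced by admission-control-config-file is a standard Kubernetes AdmissionConfiguration object. A minimal sketch is shown below; the file paths and limit values are illustrative only:
+
+```
+# /etc/rancher/rke2/admission-control-config.yaml (illustrative path)
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+  - name: EventRateLimit
+    path: /etc/rancher/rke2/event-rate-limit.yaml   # illustrative path
+```
+
+```
+# /etc/rancher/rke2/event-rate-limit.yaml (illustrative path and limits)
+apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+kind: Configuration
+limits:
+  - type: Server
+    qps: 50
+    burst: 100
+```
+
+Note that the audit output earlier in this guide shows RKE2 already passing --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml, which appears to configure Pod Security admission, so any replacement file must also carry that configuration if you rely on it.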
+
+#### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --enable-admission-plugins to AlwaysAdmit.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=AlwaysAdmit"
+```
+
+
+#### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Permissive, per CIS guidelines,
+"This setting could impact offline or isolated clusters, which have images pre-loaded and
+do not have access to a registry to pull in-use images. This setting is not appropriate for
+clusters which use this configuration."
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the enable-admission-plugins parameter to include
+AlwaysPullImages (i.e. --enable-admission-plugins=...,AlwaysPullImages,...), as sketched below.
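+
+In RKE2 config file terms that might look like the following sketch, which keeps the NodeRestriction plugin shown enabled in the audit output elsewhere in this guide; list any other plugins you rely on as well.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=NodeRestriction,AlwaysPullImages"
+```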
+
+#### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+Enabling Pod Security Policy is no longer supported on RKE2 v1.25+ and will cause applications to unexpectedly fail.
+
+#### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --disable-admission-plugins argument.
+Follow the documentation and create ServiceAccount objects as per your environment.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "disable-admission-plugins=ServiceAccount"
+```
+
+
+#### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --disable-admission-plugins argument.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "disable-admission-plugins=...,NamespaceLifecycle,..."
+```
+
+
+#### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--enable-admission-plugins' has 'NodeRestriction'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --enable-admission-plugins argument to NodeRestriction.
+Check the RKE2 config file /etc/rancher/rke2/config.yaml, and ensure that you are not overriding the admission plugins.
+If you are, include NodeRestriction in the list.
+```
+kube-apiserver-arg:
+ - "enable-admission-plugins=...,NodeRestriction,..."
+```
+
+
+#### 1.2.16 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--profiling' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --profiling argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "profiling=true"
+```
+
+
+#### 1.2.17 Ensure that the --audit-log-path argument is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-path' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-path argument to /var/lib/rancher/rke2/server/logs/audit.log.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+```
+kube-apiserver-arg:
+ - "audit-log-path=/var/log/rke2/audit.log"
+```
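+
+Note that changes to /etc/rancher/rke2/config.yaml take effect only after the rke2-server service is restarted. A minimal sketch of applying the change and confirming the log file is being written follows; it assumes a systemd-based install and the default log path (adjust the path if you configured a custom audit-log-path).
+
+```bash
+# Apply the config change and verify audit logging. Restarting rke2-server
+# briefly restarts the control-plane components on this node.
+sudo systemctl restart rke2-server
+sudo ls -lh /var/lib/rancher/rke2/server/logs/audit.log
+```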
+
+
+#### 1.2.18 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-maxage' is greater or equal to 30
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-maxage argument to 30 days.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-maxage parameter to an appropriate number of days, for example,
+```
+kube-apiserver-arg:
+ - "audit-log-maxage=40"
+```
+
+
+#### 1.2.19 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-maxbackup' is greater or equal to 10
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-maxbackup argument to 10.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to an appropriate value.
+For example,
+```
+kube-apiserver-arg:
+ - "audit-log-maxbackup=15"
+```
+
+
+#### 1.2.20 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--audit-log-maxsize' is greater or equal to 100
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --audit-log-maxsize argument to 100 MB.
+If you want to change this, edit the RKE2 config file /etc/rancher/rke2/config.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example,
+```
+kube-apiserver-arg:
+ - "audit-log-maxsize=150"
+```
+
+
+#### 1.2.21 Ensure that the --request-timeout argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--request-timeout' is not present OR '--request-timeout' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+This control is permissive; per CIS guidelines,
+"it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed".
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml
+and set the below parameter if needed. For example,
+```
+kube-apiserver-arg:
+ - "request-timeout=300s"
+```
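+
+If you do set a custom request timeout, a quick way to confirm the running value is to filter the kube-apiserver arguments; no output means the flag is unset and the 60-second default applies. This is an illustrative check, not part of the benchmark audit.
+
+```bash
+# Prints the request-timeout flag if explicitly set; silence means the
+# 60-second default is in effect.
+/bin/ps -fC kube-apiserver | grep -o 'request-timeout=[^ ]*'
+```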
+
+
+#### 1.2.22 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--service-account-lookup' is not present OR '--service-account-lookup' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --service-account-lookup argument.
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the service-account-lookup argument. For example,
+```
+kube-apiserver-arg:
+ - "service-account-lookup=true"
+```
+Alternatively, you can delete the service-account-lookup parameter from this file so
+that the default takes effect.
+
+
+#### 1.2.23 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--service-account-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+RKE2 automatically generates and sets the service account key file.
+It is located at /var/lib/rancher/rke2/server/tls/service.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "service-account-key-file="
+```
+
+
+#### 1.2.24 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--etcd-certfile' is present AND '--etcd-keyfile' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+RKE2 automatically generates and sets the etcd certificate and key files.
+They are located at /var/lib/rancher/rke2/server/tls/etcd/client.crt and /var/lib/rancher/rke2/server/tls/etcd/client.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "etcd-certfile="
+ - "etcd-keyfile="
+```
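+
+As an optional sanity check (not required by the benchmark), you can inspect the etcd client certificate that these flags point to and confirm it has not expired; this assumes openssl is available on the server node.
+
+```bash
+# Show the subject and expiry of the etcd client certificate used by the
+# kube-apiserver (run as root on the server node; requires openssl).
+sudo openssl x509 -in /var/lib/rancher/rke2/server/tls/etcd/client.crt -noout -subject -enddate
+```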
+
+
+#### 1.2.25 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--tls-cert-file' is present AND '--tls-private-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically generates and provides the TLS certificate and private key for the apiserver.
+They are generated and located at /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt and /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "tls-cert-file="
+ - "tls-private-key-file="
+```
+
+
+#### 1.2.26 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--client-ca-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the client certificate authority file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/client-ca.crt.
+If for some reason you need to provide your own CA certificate, look at using the rke2 certificate command line tool.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "client-ca-file="
+```
+
+
+#### 1.2.27 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--etcd-cafile' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the etcd certificate authority file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt.
+If for some reason you need to provide your own CA certificate, look at using the rke2 certificate command line tool.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-apiserver-arg:
+ - "etcd-cafile="
+```
+
+
+#### 1.2.28 Ensure that the --encryption-provider-config argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--encryption-provider-config' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+RKE2 is always configured to encrypt secrets.
+Secrets encryption is managed with the rke2 secrets-encrypt command line tool.
+If needed, you can find the generated encryption config at /var/lib/rancher/rke2/server/cred/encryption-config.json.
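+
+A hedged way to confirm that secrets encryption is active, assuming the rke2 binary is on your PATH on the server node, is the secrets-encrypt status subcommand:
+
+```bash
+# Reports whether secrets encryption is enabled on this server node.
+sudo rke2 secrets-encrypt status
+```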
+
+
+#### 1.2.29 Ensure that encryption providers are appropriately configured (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
+if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -o 'providers\"\:\[.*\]' $ENCRYPTION_PROVIDER_CONFIG | grep -o "[A-Za-z]*" | head -2 | tail -1 | sed 's/^/provider=/'; fi
+```
+
+**Expected Result:** 'provider' contains valid elements from 'aescbc,kms,secretbox'
+
+
+Returned Value:
+
+```console
+provider=aescbc
+```
+
+
+
+Remediation:
+
+RKE2 is always configured to use the aescbc encryption provider to encrypt secrets.
+Secrets encryption is managed with the rke2 secrets-encrypt command line tool.
+If needed, you can find the generated encryption config at /var/lib/rancher/rke2/server/cred/encryption-config.json.
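+
+As a coarser cross-check of the audit above, you can confirm that the aescbc provider appears in the generated encryption config. Run it as root, since the file is only readable by root.
+
+```bash
+# Coarse check: prints "aescbc" if the aescbc provider is present in the
+# generated encryption configuration.
+sudo grep -o 'aescbc' /var/lib/rancher/rke2/server/cred/encryption-config.json | head -1
+```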
+
+
+#### 1.2.30 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-apiserver
+```
+
+**Expected Result:** '--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+```
+
+
+
+Remediation:
+
+By default, the RKE2 kube-apiserver complies with this test. Changes to these values may cause regressions; ensure that all apiserver clients support the new TLS configuration before applying it in production deployments.
+If a custom TLS configuration is required, consider also creating a custom version of this rule that aligns with your requirements.
+If this check fails, remove any custom configuration around `tls-cipher-suites` or update the /etc/rancher/rke2/config.yaml file to match the default by adding the following:
+```
+kube-apiserver-arg:
+ - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
+```
+
+
+### 1.3 Controller Manager
+
+#### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--terminated-pod-gc-threshold' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2696 2604 1 19:13 ? 00:00:02 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets a terminated-pod-gc-threshold of 1000.
+If you need to change this value, edit the RKE2 config file /etc/rancher/rke2/config.yaml on the control plane node
+and set the --terminated-pod-gc-threshold argument to an appropriate value, for example:
+```
+kube-controller-manager-arg:
+ - "terminated-pod-gc-threshold=10"
+```
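+
+To confirm the value the running controller manager is using, you can reuse the audit command above with a simple filter; this is an optional convenience check, not part of the benchmark audit.
+
+```bash
+# Optional check: print the terminated-pod-gc-threshold the kube-controller-manager was started with.
+/bin/ps -fC kube-controller-manager | grep -o 'terminated-pod-gc-threshold=[0-9]*'
+```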
+
+
+#### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--profiling' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2696 2604 1 19:13 ? 00:00:02 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --profiling argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "profiling=true"
+```
+
+
+#### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--use-service-account-credentials' is not equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2696 2604 1 19:13 ? 00:00:02 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --use-service-account-credentials argument to true.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "use-service-account-credentials=false"
+```
+
+
+#### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--service-account-private-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2696 2604 1 19:13 ? 00:00:02 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the service account private key file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/service.current.key.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "service-account-private-key-file="
+```
+
+
+#### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--root-ca-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2696 2604 1 19:13 ? 00:00:02 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the root CA file.
+It is generated and located at /var/lib/rancher/rke2/server/tls/server-ca.crt.
+If for some reason you need to provide your own CA certificate, look at using the `rke2 certificate` command line tool.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "root-ca-file="
+```
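+
+If you want to inspect the CA that RKE2 generated (for example, to check its subject and validity period), an openssl command such as the following can be used, assuming openssl is installed on the node; this is optional and not required by the benchmark.
+
+```bash
+# Optional inspection of the RKE2-generated root CA referenced by --root-ca-file.
+openssl x509 -in /var/lib/rancher/rke2/server/tls/server-ca.crt -noout -subject -issuer -dates
+```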
+
+
+#### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--feature-gates' is present OR '--feature-gates' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2696 2604 1 19:13 ? 00:00:02 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the RotateKubeletServerCertificate feature gate, which defaults to true.
+If you have explicitly disabled this feature gate, you should remove that setting.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to the one below.
+```
+kube-controller-manager-arg:
+ - "feature-gates=RotateKubeletServerCertificate=false"
+```
+
+
+#### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-controller-manager
+```
+
+**Expected Result:** '--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2696 2604 1 19:13 ? 00:00:02 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --bind-address argument to 127.0.0.1.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-controller-manager-arg:
+ - "bind-address="
+```
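+
+Beyond checking the flag, you can verify that the controller manager's secure port (10257, per the audit output above) is bound only to the loopback interface. The command below assumes the iproute2 `ss` utility is available on the node.
+
+```bash
+# Optional check: the kube-controller-manager secure port should only be listening on 127.0.0.1.
+ss -tlnp | grep ':10257'
+```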
+
+
+### 1.4 Scheduler
+
+#### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-scheduler
+```
+
+**Expected Result:** '--profiling' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2687 2578 0 19:13 ? 00:00:00 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --profiling argument to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-scheduler-arg:
+ - "profiling=true"
+```
+
+
+#### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kube-scheduler
+```
+
+**Expected Result:** '--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2687 2578 0 19:13 ? 00:00:00 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --bind-address argument to 127.0.0.1.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines like below.
+```
+kube-scheduler-arg:
+ - "bind-address="
+```
+
+
+## 2 Etcd Node Configuration
+
+#### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+
+```
+
+**Expected Result:** '.client-transport-security.cert-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/server-client.crt' AND '.client-transport-security.key-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/server-client.key'
+
+
+Returned Value:
+
+```console
+advertise-client-urls: https://10.10.10.100:2379
+client-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
+data-dir: /var/lib/rancher/rke2/server/db/etcd
+election-timeout: 5000
+experimental-initial-corrupt-check: true
+experimental-watch-progress-notify-interval: 5000000000
+heartbeat-interval: 500
+initial-advertise-peer-urls: https://10.10.10.100:2380
+initial-cluster: server-0-51be1a67=https://10.10.10.100:2380
+initial-cluster-state: new
+listen-client-http-urls: https://127.0.0.1:2382
+listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
+listen-metrics-urls: http://127.0.0.1:2381
+listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
+log-outputs:
+- stderr
+logger: zap
+name: server-0-51be1a67
+peer-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
+snapshot-count: 10000
+```
+
+
+
+Remediation:
+
+By default, RKE2 generates cert and key files for etcd.
+These are located in /var/lib/rancher/rke2/server/tls/etcd/.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to use custom cert and key files.
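+
+Because the audit for this control reads the generated etcd configuration, you can reproduce the relevant part of the check with a simple grep against that file (run as root on the server node); this is an optional convenience check.
+
+```bash
+# Show the client TLS settings from the RKE2-managed etcd configuration.
+grep -A 4 'client-transport-security' /var/lib/rancher/rke2/server/db/etcd/config
+```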
+
+
+#### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_CLIENT_CERT_AUTH' is present OR '.client-transport-security.client-cert-auth' is equal to 'true'
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=823d332f37f2561c46e6bfa6a2b473a20a30d7ca1cd0c2a2544190d820598d16
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --client-cert-auth parameter to true.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to disable client certificate authentication.
+
+
+#### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present OR '.client-transport-security.auto-tls' is present
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=823d332f37f2561c46e6bfa6a2b473a20a30d7ca1cd0c2a2544190d820598d16
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --auto-tls parameter.
+If this check fails, edit the etcd configuration file /var/lib/rancher/rke2/server/db/etcd/config on the master
+node and either remove the auto-tls parameter from the client-transport-security section or set it to false, as shown below.
+```
+client-transport-security:
+  auto-tls: false
+```
+
+
+#### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+
+```
+
+**Expected Result:** '.peer-transport-security.cert-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-server-client.crt' AND '.peer-transport-security.key-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-server-client.key'
+
+
+Returned Value:
+
+```console
+advertise-client-urls: https://10.10.10.100:2379
+client-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
+data-dir: /var/lib/rancher/rke2/server/db/etcd
+election-timeout: 5000
+experimental-initial-corrupt-check: true
+experimental-watch-progress-notify-interval: 5000000000
+heartbeat-interval: 500
+initial-advertise-peer-urls: https://10.10.10.100:2380
+initial-cluster: server-0-51be1a67=https://10.10.10.100:2380
+initial-cluster-state: new
+listen-client-http-urls: https://127.0.0.1:2382
+listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
+listen-metrics-urls: http://127.0.0.1:2381
+listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
+log-outputs:
+- stderr
+logger: zap
+name: server-0-51be1a67
+peer-transport-security:
+ cert-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.crt
+ client-cert-auth: true
+ key-file: /var/lib/rancher/rke2/server/tls/etcd/peer-server-client.key
+ trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
+snapshot-count: 10000
+```
+
+
+
+Remediation:
+
+By default, RKE2 generates peer cert and key files for etcd.
+These are located in /var/lib/rancher/rke2/server/tls/etcd/.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to use custom peer cert and key files.
+
+
+#### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_PEER_CLIENT_CERT_AUTH' is present OR '.peer-transport-security.client-cert-auth' is equal to 'true'
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=823d332f37f2561c46e6bfa6a2b473a20a30d7ca1cd0c2a2544190d820598d16
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --peer-client-cert-auth parameter to true.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to disable peer client certificate authentication.
+
+
+#### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present OR '.peer-transport-security.auto-tls' is present
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=823d332f37f2561c46e6bfa6a2b473a20a30d7ca1cd0c2a2544190d820598d16
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --peer-auto-tls parameter.
+If this check fails, edit the etcd configuration file /var/lib/rancher/rke2/server/db/etcd/config on the master
+node and either remove the auto-tls parameter from the peer-transport-security section or set it to false, as shown below.
+```
+peer-transport-security:
+  auto-tls: false
+```
+
+
+#### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC etcd
+```
+
+**Expected Result:** 'ETCD_TRUSTED_CA_FILE' is present OR '.peer-transport-security.trusted-ca-file' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'
+
+
+Returned Value:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+HOSTNAME=server-0
+ETCD_UNSUPPORTED_ARCH=
+FILE_HASH=823d332f37f2561c46e6bfa6a2b473a20a30d7ca1cd0c2a2544190d820598d16
+NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
+HOME=/
+```
+
+
+
+Remediation:
+
+By default, RKE2 generates a unique certificate authority for etcd.
+This is located at /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt.
+If this check fails, ensure that the configuration file /var/lib/rancher/rke2/server/db/etcd/config
+has not been modified to use a shared certificate authority.
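+
+To convince yourself that the etcd certificate authority really is distinct from the cluster's server CA, you can compare their subjects with openssl (assumed to be installed on the node); the paths come from the audit output and remediation above.
+
+```bash
+# The two subjects should differ, confirming etcd uses its own certificate authority.
+openssl x509 -noout -subject -in /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
+openssl x509 -noout -subject -in /var/lib/rancher/rke2/server/tls/server-ca.crt
+```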
+
+
+## 3 Control Plane Configuration
+
+### 3.1 Authentication and Authorization
+
+#### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
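+
+As an illustration only, OIDC authentication can be wired into the RKE2-managed kube-apiserver through kube-apiserver-arg entries in /etc/rancher/rke2/config.yaml. The issuer URL and client ID below are placeholders for your identity provider, additional flags (for example username and groups claims) may be required in your environment, and if config.yaml already contains a kube-apiserver-arg block you should merge these entries into it rather than appending a second block.
+
+```bash
+# Hypothetical example: append OIDC flags for the kube-apiserver to the RKE2 config.
+# Replace the issuer URL and client ID with values from your identity provider.
+cat >> /etc/rancher/rke2/config.yaml <<'EOF'
+kube-apiserver-arg:
+ - "oidc-issuer-url=https://idp.example.com"
+ - "oidc-client-id=kubernetes"
+EOF
+# Restart RKE2 afterwards, for example: systemctl restart rke2-server.service
+```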
+
+#### 3.1.2 Service account token authentication should not be used for users (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of service account tokens.
+
+#### 3.1.3 Bootstrap token authentication should not be used for users (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of bootstrap tokens.
+
+### 3.2 Logging
+
+#### 3.2.1 Ensure that a minimal audit policy is created (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result:** '--audit-policy-file' is present
+
+
+Returned Value:
+
+```console
+root 2550 2500 12 19:13 ? 00:00:15 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --advertise-address=10.10.10.100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2696 2604 2 19:13 ? 00:00:02 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+
+
+Remediation:
+
+The cluster under test passes an audit policy to the kube-apiserver via --audit-policy-file, shown above as /etc/rancher/rke2/audit-policy.yaml.
+If this check fails, create an audit policy file for your cluster and reference it in the kube-apiserver arguments.
+
+
+#### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas,
+- Access to Secrets managed by the cluster. Care should be taken to only
+ log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+ order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
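+
+A minimal sketch of a policy that covers the areas listed above is shown below, written to the path the cluster under test already passes via --audit-policy-file. Treat it as a starting point to adapt to your own requirements rather than a definitive policy, and restart the RKE2 service after changing it.
+
+```bash
+# Example audit policy: Secrets/ConfigMaps/TokenReviews at Metadata level,
+# exec/port-forward/proxy sub-resources at Request level, everything else at Metadata level.
+cat > /etc/rancher/rke2/audit-policy.yaml <<'EOF'
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  - level: Metadata
+    resources:
+      - group: ""
+        resources: ["secrets", "configmaps"]
+      - group: "authentication.k8s.io"
+        resources: ["tokenreviews"]
+  - level: Request
+    resources:
+      - group: ""
+        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
+  - level: Metadata
+EOF
+```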
+
+## 4 Worker Node Security Configuration
+
+### 4.1 Worker Node Configuration Files
+
+#### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
+
+#### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
+
+#### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig`
+
+
+#### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result:** 'root:root' is present
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example, `chown root:root /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig`
+
+
+#### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/kubelet.kubeconfig`
+
+
+#### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+`chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig`
+
+
+#### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/client-ca.crt; then stat -c permissions=%a /var/lib/rancher/rke2/agent/client-ca.crt; fi'
+```
+
+**Expected Result:** permissions has permissions 600, expected 600 or more restrictive
+
+
+Returned Value:
+
+```console
+permissions=600
+```
+
+
+
+Remediation:
+
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+`chmod 600 /var/lib/rancher/rke2/agent/client-ca.crt`
+
+
+#### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/client-ca.crt; then stat -c %U:%G /var/lib/rancher/rke2/agent/client-ca.crt; fi'
+```
+
+**Expected Result:** 'root:root' is equal to 'root:root'
+
+
+Returned Value:
+
+```console
+root:root
+```
+
+
+
+Remediation:
+
+Run the following command to modify the ownership of the --client-ca-file.
+`chown root:root /var/lib/rancher/rke2/agent/client-ca.crt`
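+
+Controls 4.1.3 through 4.1.8 audit the same small set of files under /var/lib/rancher/rke2/agent, so a short loop (a convenience sketch, not part of the benchmark) can report and, where needed, reset their permissions and ownership in one pass on each worker node:
+
+```bash
+# Report and tighten permissions/ownership for the agent files audited in 4.1.3-4.1.8.
+for f in /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig \
+         /var/lib/rancher/rke2/agent/kubelet.kubeconfig \
+         /var/lib/rancher/rke2/agent/client-ca.crt; do
+  stat -c '%a %U:%G %n' "$f"
+  chmod 600 "$f"
+  chown root:root "$f"
+done
+```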
+
+
+#### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
+
+#### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+The kubelet is managed by the RKE2 process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
+
+### 4.2 Kubelet
+
+#### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--anonymous-auth' is equal to 'false'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --anonymous-auth to false.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to the ones below.
+```
+kubelet-arg:
+ - "anonymous-auth=true"
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--authorization-mode' does not have 'AlwaysAllow'
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --authorization-mode to AlwaysAllow.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to the ones below.
+```
+kubelet-arg:
+ - "authorization-mode=AlwaysAllow"
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--client-ca-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the client ca certificate for the Kubelet.
+It is generated and located at /var/lib/rancher/rke2/agent/client-ca.crt
+
+
+#### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--read-only-port' is equal to '0' OR '--read-only-port' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 sets the --read-only-port to 0. If you have set this to a different value, you
+should set it back to 0. Edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to the ones below.
+```
+kubelet-arg:
+ - "read-only-port=XXXX"
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--streaming-connection-idle-timeout' is present OR '--streaming-connection-idle-timeout' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the following parameter to an appropriate value.
+```
+kubelet-arg:
+ - "streaming-connection-idle-timeout=5m"
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--make-iptables-util-chains' is present OR '--make-iptables-util-chains' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the following parameter.
+```
+kubelet-arg:
+ - "make-iptables-util-chains=true"
+```
+Or, remove the --make-iptables-util-chains argument to let RKE2 use the default value.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.7 Ensure that the --hostname-override argument is not set (Automated)
+
+**Result:** Not Applicable
+
+**Rationale:**
+
+By default, RKE2 does set the --hostname-override argument. Per CIS guidelines, this is to comply
+with cloud providers that require this flag so that the hostname matches the node name.
+
+#### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--event-qps' is present OR '--event-qps' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the following parameter to an appropriate value.
+```
+kubelet-arg:
+ - "event-qps="
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--tls-cert-file' is present AND '--tls-private-key-file' is present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 automatically provides the TLS certificate and private key for the Kubelet.
+They are generated and located at /var/lib/rancher/rke2/agent/serving-kubelet.crt and /var/lib/rancher/rke2/agent/serving-kubelet.key
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any lines similar to below.
+```
+kubelet-arg:
+ - "tls-cert-file="
+ - "tls-private-key-file="
+```
+
+
+#### 4.2.10 Ensure that the --rotate-certificates argument is not set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** '--rotate-certificates' is present OR '--rotate-certificates' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the --rotate-certificates argument.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any rotate-certificates parameter.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+```bash
+/bin/ps -fC kubelet
+```
+
+**Expected Result:** 'RotateKubeletServerCertificate' is present OR 'RotateKubeletServerCertificate' is not present
+
+
+Returned Value:
+
+```console
+UID PID PPID C STIME TTY TIME CMD
+root 2281 2254 2 19:13 ? 00:00:04 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+
+
+Remediation:
+
+By default, RKE2 does not set the RotateKubeletServerCertificate feature gate.
+If this check fails, edit the RKE2 config file /etc/rancher/rke2/config.yaml and remove any RotateKubeletServerCertificate parameter.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+
+#### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the following parameter:
+```
+kubelet-arg:
+ - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
+```
+You may also use a subset of these cipher suites.
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
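+
+After the restart, you can confirm that the kubelet picked up the cipher list by reusing the same process check the other kubelet audits rely on; this is an optional convenience check.
+
+```bash
+# Optional check: print the tls-cipher-suites value the running kubelet was started with, if any.
+/bin/ps -fC kubelet | grep -o 'tls-cipher-suites=[^ ]*' | tr ',' '\n'
+```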
+
+#### 4.2.13 Ensure that a limit is set on pod PIDs (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Edit the RKE2 config file /etc/rancher/rke2/config.yaml and set the following parameter to an appropriate value.
+```
+kubelet-arg:
+ - "pod-max-pids="
+```
+Based on your system, restart the RKE2 service. For example,
+systemctl restart rke2-server.service
+
+## 5 Kubernetes Policies
+
+### 5.1 RBAC and Service Accounts
+
+#### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower-privileged role and then remove the
+clusterrolebinding to the cluster-admin role:
+`kubectl delete clusterrolebinding [name]`
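+
+A hedged example of the first step, listing every ClusterRoleBinding that references the cluster-admin role together with its subjects (run with a kubeconfig that can read RBAC objects):
+
+```bash
+# List ClusterRoleBindings whose roleRef is cluster-admin, with their subject names.
+kubectl get clusterrolebindings --no-headers \
+  -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name' \
+  | awk '$2 == "cluster-admin"'
+```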
+
+#### 5.1.2 Minimize access to secrets (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
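+
+To spot-check whether a particular subject can read Secrets, `kubectl auth can-i` with impersonation is a quick test; the service account below is only an example.
+
+```bash
+# Example: can the default service account in the default namespace read Secrets?
+kubectl auth can-i get secrets --as=system:serviceaccount:default:default -n default
+```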
+
+#### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
+
+#### 5.1.4 Minimize access to create pods (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+#### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value:
+`automountServiceAccountToken: false`
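+
+A minimal sketch of a patched `default` service account (the namespace name is illustrative):
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+  namespace: my-namespace
+automountServiceAccountToken: false
+```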
+
+#### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Modify the definition of pods and service accounts which do not need to mount service
+account tokens to disable it.
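+
+Token mounting can also be disabled per pod; a sketch with illustrative names:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: app
+  namespace: my-namespace
+spec:
+  automountServiceAccountToken: false   # pod-level setting overrides the service account default
+  containers:
+    - name: app
+      image: registry.example.com/app:latest
+```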
+
+#### 5.1.7 Avoid use of system:masters group (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Remove the system:masters group from all users in the cluster.
+
+#### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+#### 5.1.9 Minimize access to create persistent volumes (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove create access to PersistentVolume objects in the cluster.
+
+#### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove access to the proxy sub-resource of node objects.
+
+#### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
+
+#### 5.1.12 Minimize access to webhook configuration objects (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
+
+#### 5.1.13 Minimize access to the service account token creation (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Where possible, remove access to the token sub-resource of serviceaccount objects.
+
+### 5.2 Pod Security Standards
+
+#### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
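+
+One way to satisfy this with the built-in Pod Security Admission controller is to label each user namespace; a sketch with an illustrative namespace name:
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: my-app
+  labels:
+    pod-security.kubernetes.io/enforce: restricted
+    pod-security.kubernetes.io/warn: restricted
+```
+Enforcing the `restricted` (or at least `baseline`) profile in this way also broadly addresses the admission controls 5.2.2 through 5.2.13 below.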
+
+#### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+#### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+#### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+#### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
+
+#### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+#### 5.2.7 Minimize the admission of root containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with a range of UIDs that does not include 0 is set.
+
+#### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+#### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
+
+**Result:** WARN
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+#### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+#### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+#### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+#### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+### 5.3 Network Policies and CNI
+
+#### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+#### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
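+
+A common starting point is a default-deny ingress policy per namespace; a sketch with an illustrative namespace name:
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: my-namespace
+spec:
+  podSelector: {}      # selects every pod in the namespace
+  policyTypes:
+    - Ingress          # no ingress rules listed, so all inbound traffic is denied
+```
+Additional policies can then selectively allow the traffic each workload actually needs.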
+
+### 5.4 Secrets Management
+
+#### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
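+
+For example, a Secret can be mounted as a read-only volume instead of being exposed through `env`/`envFrom`; a sketch with illustrative names:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: app
+  namespace: my-namespace
+spec:
+  containers:
+    - name: app
+      image: registry.example.com/app:latest
+      volumeMounts:
+        - name: credentials
+          mountPath: /etc/credentials
+          readOnly: true
+  volumes:
+    - name: credentials
+      secret:
+        secretName: app-credentials
+```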
+
+#### 5.4.2 Consider external secret storage (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+### 5.5 Extensible Admission Control
+
+#### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the Kubernetes documentation and set up image provenance.
+
+### 5.7 General Policies
+
+#### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+#### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is as below:
+ securityContext:
+ seccompProfile:
+ type: RuntimeDefault
+
+#### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
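+
+A sketch of pod- and container-level SecurityContexts that align with the hardening recommendations (field values are illustrative and must be adjusted to the application):
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: app
+spec:
+  securityContext:                  # pod-level settings
+    runAsNonRoot: true
+    runAsUser: 1000
+    seccompProfile:
+      type: RuntimeDefault
+  containers:
+    - name: app
+      image: registry.example.com/app:latest
+      securityContext:              # container-level settings
+        allowPrivilegeEscalation: false
+        readOnlyRootFilesystem: true
+        capabilities:
+          drop:
+            - ALL
+```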
+
+#### 5.7.4 The default namespace should not be used (Manual)
+
+**Result:** WARN
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
+
diff --git a/docs/security/hardening_guide.md b/docs/security/hardening_guide.md
index 287f2251..8151eac2 100644
--- a/docs/security/hardening_guide.md
+++ b/docs/security/hardening_guide.md
@@ -4,7 +4,10 @@ title: CIS Hardening Guide
This document provides prescriptive guidance for hardening a production installation of RKE2. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
-For more details about evaluating a hardened cluster against the official CIS benchmark, refer to the CIS Benchmark [Self-Assessment Guide v1.23](cis_self_assessment123.md), or [Self-Assessment Guide v1.6](cis_self_assessment16.md) for RKE2 versions prior to v1.25.
+For more details about evaluating a hardened cluster against the official CIS benchmark, refer to the appropriate CIS Self-Assessment Guide:
+- [CIS Self-Assessment Guide v1.8](cis_self_assessment18.md) for RKE2 v1.26 and newer
+- [CIS Self-Assessment Guide v1.7](cis_self_assessment17.md) for RKE2 v1.25
+- [CIS Self-Assessment Guide v1.24](cis_self_assessment124.md) for RKE2 v1.24 and older
RKE2 is designed to be "hardened by default" and pass the majority of the Kubernetes CIS controls without modification. There are a few notable exceptions to this that require manual intervention to fully pass the CIS Benchmark:
@@ -86,35 +89,27 @@ Available with October 2023 releases (v1.25.15+rke2r1, v1.26.10+rke2r1, v1.27.7+
profile: "cis"
```
-Using the generic `cis` profile will ensure that the cluster passes the CIS benchmark (rke2-cis-1.XX-profile-hardened) associated with the Kubernetes version that RKE2 is running. For example, RKE2 v1.28.XX with the `profile: cis` will pass the `rke2-cis-1.7-profile-hardened` in Rancher.
+Using the generic `cis` profile will ensure that the cluster passes the CIS benchmark (rke2-cis-1.XX-profile-hardened) associated with the Kubernetes version that RKE2 is running. For example, RKE2 v1.26.XX with `profile: cis` will pass the `rke2-cis-1.8-profile-hardened` profile in Rancher.
Use of the generic `cis` profile ensures that upgrades to RKE2 do not require a change to existing configuration. Whatever changes are necessary to pass applicable CIS benchmark will be automatically applied.
A rough mapping of RKE2 versions to CIS benchmark versions is as follows:
-| CIS Benchmark | Applicable RKE2 Minors | Profile Flag |
+| RKE2 Minors | Applicable CIS Benchmark | Profile Flag |
| - | - | - |
-| 1.5 | 1.15-1.18 | `cis-1.5` |
-| 1.6 | 1.19-1.22 | `cis-1.6` |
-| 1.23 | 1.23 | `cis-1.23` |
+| 1.27+ | 1.8 | `cis` |
+| 1.26 | 1.8 | `cis-1.23`, `cis` |
+| 1.25 | 1.7 | `cis-1.23`, `cis` |
| 1.24 | 1.24 | `cis-1.23` |
-| 1.7 | 1.25-1.28 | `cis-1.23`, `cis` |
-| 1.8 | 1.29+ | `cis` |
-
-### CIS v1.23 configuration
-For older versions of 1.25 and 1.26, the `cis-1.23` profile is still available. This profile will ensure that the cluster passes the CIS v1.7 benchmark (rke2-cis-1.7-profile-hardened) available in Rancher.
-
-```yaml
-profile: "cis-1.23"
-```
+| 1.23 | 1.23 | `cis-1.23` |
+| 1.19-1.22 | 1.6 | `cis-1.6` |
+| 1.15-1.18 | 1.5 | `cis-1.5` |
-Below is the minimum necessary configuration needed for hardening RKE2 to pass CIS v1.6 hardened profile `rke2-cis-1.6-profile-hardened` available in Rancher.
-
+
+For example, to use a specific profile version from the table above:
+
```yaml
-profile: "cis-1.6" # CIS 4.2.6, 5.2.1, 5.2.8, 5.2.9, 5.3.2
+profile: "cis-1.6"
```
@@ -301,20 +296,6 @@ The `default` service account should be configured such that it does not provide
This can be remediated by updating the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace.
-**Remediation**
-You can manually update this field on service accounts in your cluster to pass the control as described [above](#configure-default-service-account).
-
-### Control 5.3.2
-Ensure that all Namespaces have Network Policies defined
-
-**Rationale**
-Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints.
-
-Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace all traffic will be allowed into and out of the pods in that namespace.
-
-**Remediation**
-This can be remediated by starting RKE2 with the `profile` flag set in the configuration file as described [above](#rke2-configuration).
-
## Conclusion
-If you have followed this guide, your RKE2 cluster will be configured to pass the CIS Kubernetes Benchmark. You can review our CIS Benchmark Self-Assessment Guide [v1.6](cis_self_assessment16.md) or [v1.23](cis_self_assessment123.md) to understand how we verified each of the benchmarks and how you can do the same on your cluster.
+If you have followed this guide, your RKE2 cluster will be configured to pass the CIS Kubernetes Benchmark. You can review our CIS Self-Assessment Guides to understand how we verified each of the benchmarks and how you can do the same on your cluster.
diff --git a/scripts/kubebench-to-markdown.sh b/scripts/kubebench-to-markdown.sh
new file mode 100755
index 00000000..13f9f46b
--- /dev/null
+++ b/scripts/kubebench-to-markdown.sh
@@ -0,0 +1,143 @@
+#!/bin/bash
+
+# To generate the expected json report, run the following command:
+# kube-bench run --benchmark=rke2-cis-1.7 --json > rke2-cis-1.7.json
+
+# Then pass the json file to this script:
+# ./kubebench-to-markdown.sh rke2-cis-1.7.json > cis-1.7.md
+
+
+# Save suite (top-level control group) titles in an array; matched against test ids later
+suites_raw=$(jq -c '.Controls[]' "$1")
+declare -A suites
+while read -r suite; do
+ id=$(echo "$suite" | jq -r '.id')
+ title=$(echo "$suite" | jq -r '.text')
+ suites[$id]=$title
+done < <(echo "$suites_raw")
+
+sections_raw=$(jq -c '.Controls[].tests[]' "$1")
+declare -A sections
+while read -r section; do
+ id=$(echo "$section" | jq -r '.section')
+ description=$(echo "$section" | jq -r '.desc')
+ sections[$id]=$description
+done < <(echo "$sections_raw")
+
+# Read all result entries, ignore high-level groups
+jq -c '.Controls[].tests[].results[]' "$1" | while read -r result; do
+
+ # Output details in markdown format
+ status=$(echo "$result" | jq -r '.status')
+ id=$(echo "$result" | jq -r '.test_number')
+ title=$(echo "$result" | jq -r '.test_desc')
+ audit=$(echo "$result" | jq -r '.audit')
+ expected_result=$(echo "$result" | jq -r '.expected_result')
+ actual_value=$(echo "$result" | jq -r '.actual_value')
+ remediation=$(echo "$result" | jq -r '.remediation')
+ # check if suite matches the start of id
+ suite_id_found=""
+ for suite_id in "${!suites[@]}"; do
+ if [[ $id == $suite_id* ]]; then
+ suite_id_found=$suite_id
+ echo "## $suite_id ${suites[$suite_id]}"
+ echo
+ fi
+ done
+ if [ -n "$suite_id_found" ]; then
+ unset suites["$suite_id_found"]
+ fi
+ # check if section matches the start of id
+ section_id_found=""
+ for section_id in "${!sections[@]}"; do
+ if [[ $id == $section_id* ]]; then
+ section_id_found=$section_id
+ echo "### $section_id ${sections[$section_id]}"
+ echo
+ fi
+ done
+ if [ -n "$section_id_found" ]; then
+ unset sections["$section_id_found"]
+ fi
+ echo "#### $id $title"
+ echo
+
+ # fix html special characters and misspellings
+  remediation=${remediation//<file>/&lt;file&gt;}
+  remediation=$(perl -pe 's/(--kube.*?=)<(.*?)>/\1&lt;\2&gt;/g' <<< "$remediation")
+ remediation=${remediation/capabilites/capabilities}
+ remediation=${remediation/applicaions/applications}
+
+ # encase kube-XXX-args yaml block in ```
+ if [[ "$remediation" =~ (kube-.*-arg:.*) ]]; then
+ remediation=$(perl -pe 'BEGIN{undef $/} s/^kube-.*-arg:(\n -\s.*)+/```\n$&\n```/mg' <<< "$remediation")
+ fi
+ if [[ "$remediation" =~ (kubelet-arg:.*) ]]; then
+ remediation=$(perl -pe 'BEGIN{undef $/} s/^kubelet-arg:(\n -\s.*)+/```\n$&\n```/mg' <<< "$remediation")
+ fi
+ # encase chown and chmod commands in `
+ if [[ "$remediation" =~ (chown.*) ]]; then
+ remediation=$(perl -pe 's/(chown.*)/`$1`/g' <<< "$remediation")
+ fi
+ if [[ "$remediation" =~ (chmod.*) ]]; then
+ remediation=$(perl -pe 's/(chmod.*)/`$1`/g' <<< "$remediation")
+ fi
+
+ case $status in
+ PASS | FAIL)
+ # Remove curly braces from expected result, conflicts with html embedding
+ expected_result=${expected_result//\{/}
+ expected_result=${expected_result//\}/}
+ echo "**Result:** $status"
+ echo
+ echo "**Audit:**"
+ echo "\`\`\`bash"
+ echo "$audit"
+ echo "\`\`\`"
+ echo
+ echo "**Expected Result:** $expected_result"
+ echo
+      echo "<details>"
+      echo "<summary>Returned Value:</summary>"
+      echo
+      echo "\`\`\`console"
+      echo "$actual_value"
+      echo "\`\`\`"
+      echo "</details>"
+      echo
+      echo "<details>"
+      echo "<summary>Remediation:</summary>"
+      echo
+      echo "$remediation"
+      echo "</details>"
+      echo
+ ;;
+ WARN)
+ echo "**Result:** $status"
+ echo
+ echo "**Remediation:**"
+ echo "$remediation"
+ echo
+ ;;
+ INFO)
+    # If the remediation starts with "Not Applicable.", we know it is an ignored check;
+    # the "remediation" text is actually the rationale for ignoring the check.
+ if [[ $remediation == "Not Applicable."* ]]; then
+ remediation=${remediation//Not Applicable./}
+ echo "**Result:** Not Applicable"
+ echo
+ echo "**Rationale:**"
+ echo "$remediation"
+ echo
+ continue
+ else
+ echo "**Result:** $status"
+ echo
+ echo "**Remediation:**"
+ echo "$remediation"
+ echo
+ fi
+ ;;
+ esac
+done
\ No newline at end of file
diff --git a/sidebars.js b/sidebars.js
index dc07e2d2..6693ff6c 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -32,8 +32,9 @@ module.exports = {
items:[
'security/about_hardened_images',
'security/hardening_guide',
- 'security/cis_self_assessment16',
- 'security/cis_self_assessment123',
+ 'security/cis_self_assessment18',
+ 'security/cis_self_assessment17',
+ 'security/cis_self_assessment124',
'security/fips_support',
'security/pod_security_policies',
'security/pod_security_standards',