diff --git a/Documentation/CRDs/Cluster/external-cluster/external-cluster.md b/Documentation/CRDs/Cluster/external-cluster/external-cluster.md
index 1be2aa9116b5..2bedb17026eb 100644
--- a/Documentation/CRDs/Cluster/external-cluster/external-cluster.md
+++ b/Documentation/CRDs/Cluster/external-cluster/external-cluster.md
@@ -25,13 +25,13 @@ In order to configure an external Ceph cluster with Rook, we need to extract som
 
 ### 1. Create all users and keys
 
-Run the python script [create-external-cluster-resources.py](https://github.com/rook/rook/blob/master/deploy/examples/create-external-cluster-resources.py) for creating all users and keys.
+Run the python script [create-external-cluster-resources.py](https://github.com/rook/rook/blob/master/deploy/examples/external/create-external-cluster-resources.py) to create all users and keys.
 
 ```console
 python3 create-external-cluster-resources.py --rbd-data-pool-name <pool_name> --cephfs-filesystem-name <filesystem-name> --rgw-endpoint <rgw-endpoint> --namespace <namespace> --format bash
 ```
 
-* `--namespace`: Namespace where CephCluster will run, for example `rook-ceph-external`
+* `--namespace`: Namespace where CephCluster will run, for example `rook-ceph`
 * `--format bash`: The format of the output
 * `--rbd-data-pool-name`: The name of the RBD data pool
 * `--alias-rbd-data-pool-name`: Provides an alias for the RBD data pool name, necessary if a special character is present in the pool name such as a period or underscore
@@ -40,7 +40,7 @@ python3 create-external-cluster-resources.py --rbd-data-pool-name <pool_name> --
 * `--rgw-tls-cert-path`: (optional) RADOS Gateway endpoint TLS certificate file path
 * `--rgw-skip-tls`: (optional) Ignore TLS certification validation when a self-signed certificate is provided (NOT RECOMMENDED)
 * `--rbd-metadata-ec-pool-name`: (optional) Provides the name of erasure coded RBD metadata pool, used for creating ECRBDStorageClass.
-* `--monitoring-endpoint`: (optional) Ceph Manager prometheus exporter endpoints (comma separated list of entries of active and standby mgrs)
+* `--monitoring-endpoint`: (optional) Ceph Manager prometheus exporter endpoints (comma separated list of IP entries of active and standby mgrs)
 * `--monitoring-endpoint-port`: (optional) Ceph Manager prometheus exporter port
 * `--skip-monitoring-endpoint`: (optional) Skip prometheus exporter endpoints, even if they are available. Useful if the prometheus module is not enabled
 * `--ceph-conf`: (optional) Provide a Ceph conf file
@@ -60,9 +60,9 @@ python3 create-external-cluster-resources.py --rbd-data-pool-name <pool_name> --
 * `--upgrade`: (optional) Upgrades the cephCSIKeyrings(For example: client.csi-cephfs-provisioner) and client.healthchecker ceph users with new permissions needed for the new cluster version and older permission will still be applied.
 * `--restricted-auth-permission`: (optional) Restrict cephCSIKeyrings auth permissions to specific pools, and cluster. Mandatory flags that need to be set are `--rbd-data-pool-name`, and `--k8s-cluster-name`. `--cephfs-filesystem-name` flag can also be passed in case of CephFS user restriction, so it can restrict users to particular CephFS filesystem.
 * `--v2-port-enable`: (optional) Enables the v2 mon port (3300) for mons.
-* `--topology-pools`: (optional) Comma-separated list of topology-constrained rbd pools
-* `--topology-failure-domain-label`: (optional) K8s cluster failure domain label (example: zone, rack, or host) for the topology-pools that match the ceph domain
-* `--topology-failure-domain-values`: (optional) Comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the `topology-pools` list
+* `--topology-pools`: (optional) Comma-separated list of topology-constrained rbd pools
+* `--topology-failure-domain-label`: (optional) K8s cluster failure domain label (example: zone, rack, or host) for the topology-pools that match the ceph domain
+* `--topology-failure-domain-values`: (optional) Comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the `topology-pools` list
 
 ### Multi-tenancy
 
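As an illustration of how the three topology flags fit together, a hypothetical invocation might look like the sketch below; the pool names and zone values are made up for this example and would have to match the actual Ceph pools and K8s failure domain labels:

```console
python3 create-external-cluster-resources.py --rbd-data-pool-name replicapool \
  --topology-pools us-east-1a-pool,us-east-1b-pool \
  --topology-failure-domain-label zone \
  --topology-failure-domain-values us-east-1a,us-east-1b \
  --format bash
```
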
@@ -71,11 +71,12 @@ It will generate the secrets which you can use for creating new `Consumer cluste
 So you would be running different isolated consumer clusters on top of single `Source cluster`.
 
 !!! note
+
     Restricting the csi-users per pool, and per cluster will require creating new csi-users and new secrets for those csi-users. So apply these secrets only to new `Consumer cluster` deployment while using the same `Source cluster`.
 
 ```console
-python3 create-external-cluster-resources.py --cephfs-filesystem-name <filesystem-name> --rbd-data-pool-name <pool_name> --k8s-cluster-name <k8s-cluster-name> --restricted-auth-permission true --format <bash> --rgw-endpoint <rgw_endpoint> --namespace <namespace>
+python3 create-external-cluster-resources.py --cephfs-filesystem-name <filesystem-name> --rbd-data-pool-name <pool_name> --k8s-cluster-name <k8s-cluster-name> --restricted-auth-permission true --format <bash> --rgw-endpoint <rgw_endpoint> --namespace <namespace>
 ```
 
 ### RGW Multisite
 
@@ -95,23 +96,23 @@ The storageclass is used to create a volume in the pool matching the topology wh
 For more details, see the [Topology-Based Provisioning](topology-for-external-mode.md)
-
 ### Upgrade Example
 
-1) If consumer cluster doesn't have restricted caps, this will upgrade all the default csi-users (non-restricted):
+1. If the consumer cluster doesn't have restricted caps, this will upgrade all the default csi-users (non-restricted):
 
-```console
-python3 create-external-cluster-resources.py --upgrade
-```
+    ```console
+    python3 create-external-cluster-resources.py --upgrade
+    ```
 
-2) If the consumer cluster has restricted caps:
+2. If the consumer cluster has restricted caps:
 
     Restricted users created using `--restricted-auth-permission` flag need to pass mandatory flags: '`--rbd-data-pool-name`(if it is a rbd user), `--k8s-cluster-name` and `--run-as-user`' flags while upgrading, in case of cephfs users if you have passed `--cephfs-filesystem-name` flag while creating csi-users then while upgrading it will be mandatory too.
    In this example the user would be `client.csi-rbd-node-rookstorage-replicapool` (following the pattern `csi-user-clusterName-poolName`)
 
-```console
-python3 create-external-cluster-resources.py --upgrade --rbd-data-pool-name replicapool --k8s-cluster-name rookstorage --run-as-user client.csi-rbd-node-rookstorage-replicapool
-```
+    ```console
+    python3 create-external-cluster-resources.py --upgrade --rbd-data-pool-name replicapool --k8s-cluster-name rookstorage --run-as-user client.csi-rbd-node-rookstorage-replicapool
+    ```
 
 !!! note
+
     An existing non-restricted user cannot be converted to a restricted user by upgrading.
     The upgrade flag should only be used to append new permissions to users. It shouldn't be used for changing a csi user's already applied permissions.
     For example, you shouldn't change the pool(s) a user has access to.
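Before relying on `--upgrade` against a restricted user, it can help to inspect the user's current caps on the Ceph side; a quick check using the example user above (run with admin access, e.g. from the toolbox):

```console
ceph auth get client.csi-rbd-node-rookstorage-replicapool
```
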
@@ -120,9 +121,11 @@ python3 create-external-cluster-resources.py --upgrade --rbd-data-pool-name repl
 If in case the cluster needs the admin keyring to configure, update the admin key `rook-ceph-mon` secret with client.admin keyring
 
 !!! note
+
     Sharing the admin key with the external cluster is not generally recommended
 
 1. Get the `client.admin` keyring from the ceph cluster
+
     ```console
     ceph auth get client.admin
     ```
@@ -158,13 +161,13 @@ export RGW_POOL_PREFIX=default
 To install with Helm, the rook cluster helm chart will configure the necessary resources for the external cluster with the example `values-external.yaml`.
 
 ```console
-  clusterNamespace=rook-ceph
-  operatorNamespace=rook-ceph
-  cd deploy/examples/charts/rook-ceph-cluster
-  helm repo add rook-release https://charts.rook.io/release
-  helm install --create-namespace --namespace $clusterNamespace rook-ceph rook-release/rook-ceph -f values.yaml
-  helm install --create-namespace --namespace $clusterNamespace rook-ceph-cluster \
-  --set operatorNamespace=$operatorNamespace rook-release/rook-ceph-cluster -f values-external.yaml
+clusterNamespace=rook-ceph
+operatorNamespace=rook-ceph
+cd deploy/examples/charts/rook-ceph-cluster
+helm repo add rook-release https://charts.rook.io/release
+helm install --create-namespace --namespace $clusterNamespace rook-ceph rook-release/rook-ceph -f values.yaml
+helm install --create-namespace --namespace $clusterNamespace rook-ceph-cluster \
+--set operatorNamespace=$operatorNamespace rook-release/rook-ceph-cluster -f values-external.yaml
 ```
 
 Skip the manifest installation section and continue with [Cluster Verification](#cluster-verification).
@@ -175,7 +178,7 @@ If not installing with Helm, here are the steps to install with manifests.
 
 1. Deploy Rook, create [common.yaml](https://github.com/rook/rook/blob/master/deploy/examples/common.yaml), [crds.yaml](https://github.com/rook/rook/blob/master/deploy/examples/crds.yaml) and [operator.yaml](https://github.com/rook/rook/blob/master/deploy/examples/operator.yaml) manifests.
 
-2. Create [common-external.yaml](https://github.com/rook/rook/blob/master/deploy/examples/common-external.yaml) and [cluster-external.yaml](https://github.com/rook/rook/blob/master/deploy/examples/cluster-external.yaml)
+2. Create [common-external.yaml](https://github.com/rook/rook/blob/master/deploy/examples/external/common-external.yaml) and [cluster-external.yaml](https://github.com/rook/rook/blob/master/deploy/examples/external/cluster-external.yaml)
 
 ### Import the Source Data
 
@@ -186,15 +189,22 @@ If not installing with Helm, here are the steps to install with manifests.
    changing the current context, you can specify the cluster name by setting the
    KUBECONTEXT environment variable.
 
-    ```console
-    export KUBECONTEXT=<cluster-name>
-    ```
+   ```console
+   export KUBECONTEXT=<cluster-name>
+   ```
+
+3. The [import](https://github.com/rook/rook/blob/master/deploy/examples/external/import-external-cluster.sh) script assumes the `rook-ceph` namespace, and several parameters in the script are derived from that namespace variable. If the external cluster uses a different namespace, update the namespace parameter in the script accordingly. For example, for a namespace named `new-namespace`:
+
+   ```console
+   NAMESPACE=${NAMESPACE:="new-namespace"}
+   ```
+
+4. Run the import script.
 
-3. Run the [import](https://github.com/rook/rook/blob/master/deploy/examples/import-external-cluster.sh) script.
+   !!! note
 
-    !!! note
-        If your Rook cluster nodes are running a kernel earlier than or equivalent to 5.4, remove
-        `fast-diff,object-map,deep-flatten,exclusive-lock` from the `imageFeatures` line.
+       If your Rook cluster nodes are running kernel 5.4 or earlier, remove
+       `fast-diff,object-map,deep-flatten,exclusive-lock` from the `imageFeatures` line.
 
    ```console
   . import-external-cluster.sh
   ```
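Putting steps 2 through 4 together, a hypothetical end-to-end run could look like the following sketch; the context name is illustrative:

```console
# Assumes the export statements produced by create-external-cluster-resources.py
# have already been applied in this shell.
export KUBECONTEXT=consumer-cluster-context
. import-external-cluster.sh
```
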
@@ -205,7 +215,7 @@ If not installing with Helm, here are the steps to install with manifests.
 1. Verify the consumer cluster is connected to the source ceph cluster:
 
    ```console
-   $ kubectl -n rook-ceph-external get CephCluster
+   $ kubectl -n rook-ceph get CephCluster
    NAME                 DATADIRHOSTPATH   MONCOUNT   AGE    STATE       HEALTH
    rook-ceph-external   /var/lib/rook                162m   Connected   HEALTH_OK
    ```
@@ -214,38 +224,39 @@ If not installing with Helm, here are the steps to install with manifests.
 2. `ceph-rbd` and `cephfs` would be the respective names for the RBD and CephFS storage classes.
 
    ```console
-   kubectl -n rook-ceph-external get sc
+   kubectl -n rook-ceph get sc
    ```
 
 3. Then you can now create a [persistent volume](https://github.com/rook/rook/tree/master/deploy/examples/csi) based on these StorageClass.
 
 ### Connect to an External Object Store
 
-Create the [external object store CR](https://github.com/rook/rook/blob/master/deploy/examples/object-external.yaml) to configure connection to external gateways.
+Create the [external object store CR](https://github.com/rook/rook/blob/master/deploy/examples/external/object-external.yaml) to configure connection to external gateways.
 
 ```console
-  cd deploy/examples
-  kubectl create -f object-external.yaml
+cd deploy/examples/external
+kubectl create -f object-external.yaml
 ```
 
 Consume the S3 Storage, in two different ways:
 
 1. Create an [Object store user](https://github.com/rook/rook/blob/master/deploy/examples/object-user.yaml) for credentials to access the S3 endpoint.
 
-```console
+   ```console
    cd deploy/examples
    kubectl create -f object-user.yaml
-```
+   ```
 
-2. Create a [bucket storage class](https://github.com/rook/rook/blob/master/deploy/examples/storageclass-bucket-delete.yaml) where a client can request creating buckets and then create the [Object Bucket Claim](https://github.com/rook/rook/blob/master/deploy/examples/object-bucket-claim-delete.yaml), which will create an individual bucket for reading and writing objects.
+2. Create a [bucket storage class](https://github.com/rook/rook/blob/master/deploy/examples/external/storageclass-bucket-delete.yaml) where a client can request creating buckets and then create the [Object Bucket Claim](https://github.com/rook/rook/blob/master/deploy/examples/external/object-bucket-claim-delete.yaml), which will create an individual bucket for reading and writing objects.
 
-```console
-  cd deploy/examples
+   ```console
+   cd deploy/examples/external
    kubectl create -f storageclass-bucket-delete.yaml
    kubectl create -f object-bucket-claim-delete.yaml
-```
+   ```
 
 !!! hint
+
     For more details see the [Object Store topic](../../../Storage-Configuration/Object-Storage-RGW/object-storage.md#connect-to-an-external-object-store)
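To sanity-check the imported block storage end to end, a minimal PVC against the `ceph-rbd` storage class can be created inline; the claim name and size here are arbitrary:

```console
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
EOF
```
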
 
 ### Connect to v2 mon port
@@ -258,14 +269,15 @@ If the v2 address type is present in the `ceph quorum_status`, then the output o
 
 If you have multiple K8s clusters running, and want to use the local `rook-ceph` cluster as the central storage, you can export the settings from this cluster with the following steps.
 
-1) Copy create-external-cluster-resources.py into the directory `/etc/ceph/` of the toolbox.
+1. Copy create-external-cluster-resources.py into the directory `/etc/ceph/` of the toolbox.
 
-  ```console
-  toolbox=$(kubectl get pod -l app=rook-ceph-tools -n rook-ceph -o jsonpath='{.items[*].metadata.name}')
-  kubectl -n rook-ceph cp deploy/examples/create-external-cluster-resources.py $toolbox:/etc/ceph
-  ```
+   ```console
+   toolbox=$(kubectl get pod -l app=rook-ceph-tools -n rook-ceph -o jsonpath='{.items[*].metadata.name}')
+   kubectl -n rook-ceph cp deploy/examples/external/create-external-cluster-resources.py $toolbox:/etc/ceph
+   ```
 
-2) Exec to the toolbox pod and execute create-external-cluster-resources.py with needed options to create required [users and keys](#supported-features).
+2. Exec to the toolbox pod and execute create-external-cluster-resources.py with needed options to create required [users and keys](#1-create-all-users-and-keys).
 
 !!! important
-    For other clusters to connect to storage in this cluster, Rook must be configured with a networking configuration that is accessible from other clusters. Most commonly this is done by enabling host networking in the CephCluster CR so the Ceph daemons will be addressable by their host IPs.
+
+    For other clusters to connect to storage in this cluster, Rook must be configured with a networking configuration that is accessible from other clusters. Most commonly this is done by enabling host networking in the CephCluster CR so the Ceph daemons will be addressable by their host IPs.
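For step 2, a minimal sketch of executing the copied script inside the toolbox, assuming the `$toolbox` variable from step 1 and a pool named `replicapool`:

```console
kubectl -n rook-ceph exec $toolbox -- \
  python3 /etc/ceph/create-external-cluster-resources.py --rbd-data-pool-name replicapool --format bash
```
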
diff --git a/Documentation/Getting-Started/example-configurations.md b/Documentation/Getting-Started/example-configurations.md
index 25ac95b6f4e7..b48d992be1ee 100644
--- a/Documentation/Getting-Started/example-configurations.md
+++ b/Documentation/Getting-Started/example-configurations.md
@@ -38,7 +38,7 @@ the cluster. These examples represent several different ways to configure the st
 * [`cluster.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster.yaml): Common settings for a production storage cluster. Requires at least three worker nodes.
 * [`cluster-test.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-test.yaml): Settings for a test cluster where redundancy is not configured. Requires only a single node.
 * [`cluster-on-pvc.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-on-pvc.yaml): Common settings for backing the Ceph Mons and OSDs by PVs. Useful when running in cloud environments or where local PVs have been created for Ceph to consume.
-* [`cluster-external.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-external.yaml): Connect to an [external Ceph cluster](../CRDs/Cluster/ceph-cluster-crd.md#external-cluster) with minimal access to monitor the health of the cluster and connect to the storage.
+* [`cluster-external.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/external/cluster-external.yaml): Connect to an [external Ceph cluster](../CRDs/Cluster/ceph-cluster-crd.md#external-cluster) with minimal access to monitor the health of the cluster and connect to the storage.
 * [`cluster-external-management.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-external-management.yaml): Connect to an [external Ceph cluster](../CRDs/Cluster/ceph-cluster-crd.md#external-cluster) with the admin key of the external cluster to enable remote creation of pools and configure services such as an [Object Store](../Storage-Configuration/Object-Storage-RGW/object-storage.md) or a [Shared Filesystem](../Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage.md).
 * [`cluster-stretched.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-stretched.yaml): Create a cluster in "stretched" mode, with five mons stretched across three zones, and the OSDs across two zones. See the [Stretch documentation](../CRDs/Cluster/ceph-cluster-crd.md#stretch-cluster).
diff --git a/deploy/examples/external/cluster-external.yaml b/deploy/examples/external/cluster-external.yaml
new file mode 100644
index 000000000000..afe95c76a0c6
--- /dev/null
+++ b/deploy/examples/external/cluster-external.yaml
@@ -0,0 +1,30 @@
+#################################################################################################################
+# If Rook is not managing any existing cluster in the 'rook-ceph' namespace do:
+# kubectl create -f ../../examples/crds.yaml -f ../../examples/common.yaml -f ../../examples/operator.yaml
+# kubectl create -f common-external.yaml -f cluster-external.yaml
+#
+# If there is already a cluster managed by Rook in 'rook-ceph' then do:
+# kubectl create -f common-external.yaml
+# kubectl create -f cluster-external.yaml
+#################################################################################################################
+apiVersion: ceph.rook.io/v1
+kind: CephCluster
+metadata:
+  name: rook-ceph-external
+  namespace: rook-ceph # namespace:cluster
+spec:
+  external:
+    enable: true
+  crashCollector:
+    disable: true
+  healthCheck:
+    daemonHealth:
+      mon:
+        disabled: false
+        interval: 45s
+  # optionally, the ceph-mgr IP address can be passed to gather metrics from the prometheus exporter
+  # monitoring:
+  #   enabled: true
+  #   externalMgrEndpoints:
+  #     - ip: ip
+  #   externalMgrPrometheusPort: 9283
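If the commented `monitoring` block above is enabled, the consumer cluster must be able to reach the external mgr prometheus exporter on the configured port; one hypothetical smoke test, where the address stands in for the real active mgr IP (reported by `ceph mgr services`):

```console
# 203.0.113.10 is a placeholder documentation IP; substitute your mgr address.
curl -s http://203.0.113.10:9283/metrics | head
```
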
diff --git a/deploy/examples/external/common-external.yaml b/deploy/examples/external/common-external.yaml
new file mode 100644
index 000000000000..2a8c7b21427d
--- /dev/null
+++ b/deploy/examples/external/common-external.yaml
@@ -0,0 +1,68 @@
+###################################################################################################################
+# kubectl create -f ../../examples/crds.yaml -f ../../examples/common.yaml -f ../../examples/operator.yaml
+#
+# Then kubectl create -f common-external.yaml -f cluster-external.yaml
+###################################################################################################################
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: rook-ceph # namespace:cluster
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: rook-ceph-cluster-mgmt
+  namespace: rook-ceph # namespace:cluster
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: rook-ceph-cluster-mgmt
+subjects:
+  - kind: ServiceAccount
+    name: rook-ceph-system
+    namespace: rook-ceph # namespace:operator
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: rook-ceph-cmd-reporter
+  namespace: rook-ceph # namespace:cluster
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: rook-ceph-cmd-reporter
+subjects:
+  - kind: ServiceAccount
+    name: rook-ceph-cmd-reporter
+    namespace: rook-ceph # namespace:cluster
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: rook-ceph-cmd-reporter
+  namespace: rook-ceph # namespace:cluster
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: rook-ceph-default
+  namespace: rook-ceph # namespace:cluster
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: rook-ceph-cmd-reporter
+  namespace: rook-ceph # namespace:cluster
+rules:
+  - apiGroups:
+      - ""
+    resources:
+      - pods
+      - configmaps
+    verbs:
+      - get
+      - list
+      - watch
+      - create
+      - update
+      - delete
diff --git a/deploy/examples/external/create-external-cluster-resources-tests.py b/deploy/examples/external/create-external-cluster-resources-tests.py
new file mode 120000
index 000000000000..43635c765bcf
--- /dev/null
+++ b/deploy/examples/external/create-external-cluster-resources-tests.py
@@ -0,0 +1 @@
+../create-external-cluster-resources-tests.py
\ No newline at end of file
diff --git a/deploy/examples/external/create-external-cluster-resources.py b/deploy/examples/external/create-external-cluster-resources.py
new file mode 120000
index 000000000000..125a2fbc4be8
--- /dev/null
+++ b/deploy/examples/external/create-external-cluster-resources.py
@@ -0,0 +1 @@
+../create-external-cluster-resources.py
\ No newline at end of file
diff --git a/deploy/examples/external/dashboard-external-http.yaml b/deploy/examples/external/dashboard-external-http.yaml
new file mode 120000
index 000000000000..c3304a916950
--- /dev/null
+++ b/deploy/examples/external/dashboard-external-http.yaml
@@ -0,0 +1 @@
+../dashboard-external-http.yaml
\ No newline at end of file
diff --git a/deploy/examples/external/dashboard-external-https.yaml b/deploy/examples/external/dashboard-external-https.yaml
new file mode 120000
index 000000000000..db81e75ffdb0
--- /dev/null
+++ b/deploy/examples/external/dashboard-external-https.yaml
@@ -0,0 +1 @@
+../dashboard-external-https.yaml
\ No newline at end of file
diff --git a/deploy/examples/external/import-external-cluster.sh b/deploy/examples/external/import-external-cluster.sh
new file mode 120000
index 000000000000..8056a67af34b
--- /dev/null
+++ b/deploy/examples/external/import-external-cluster.sh
@@ -0,0 +1 @@
+../import-external-cluster.sh
\ No newline at end of file
diff --git a/deploy/examples/external/object-bucket-claim-delete.yaml b/deploy/examples/external/object-bucket-claim-delete.yaml
new file mode 120000
index 000000000000..46c2ea9868e1
--- /dev/null
+++ b/deploy/examples/external/object-bucket-claim-delete.yaml
@@ -0,0 +1 @@
+../object-bucket-claim-delete.yaml
\ No newline at end of file
diff --git a/deploy/examples/external/object-external.yaml b/deploy/examples/external/object-external.yaml
new file mode 120000
index 000000000000..b78e65d7bf2f
--- /dev/null
+++ b/deploy/examples/external/object-external.yaml
@@ -0,0 +1 @@
+../object-external.yaml
\ No newline at end of file
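Because the files added under `deploy/examples/external/` above are symlinks (mode 120000) into the parent examples directory, a quick way to confirm they resolve after checkout is:

```console
ls -l deploy/examples/external/
# expected output includes entries such as:
# import-external-cluster.sh -> ../import-external-cluster.sh
```
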
diff --git a/deploy/examples/external/storageclass-bucket-delete.yaml b/deploy/examples/external/storageclass-bucket-delete.yaml
new file mode 120000
index 000000000000..3be95bf1ec87
--- /dev/null
+++ b/deploy/examples/external/storageclass-bucket-delete.yaml
@@ -0,0 +1 @@
+../storageclass-bucket-delete.yaml
\ No newline at end of file
diff --git a/deploy/examples/import-external-cluster.sh b/deploy/examples/import-external-cluster.sh
index ed17a7abc950..c9afb003a91d 100644
--- a/deploy/examples/import-external-cluster.sh
+++ b/deploy/examples/import-external-cluster.sh
@@ -4,7 +4,7 @@ set -e
 ##############
 # VARIABLES #
 #############
-NAMESPACE=${NAMESPACE:="rook-ceph-external"}
+NAMESPACE=${NAMESPACE:="rook-ceph"}
 MON_SECRET_NAME=rook-ceph-mon
 RGW_ADMIN_OPS_USER_SECRET_NAME=rgw-admin-ops-user
 MON_SECRET_CLUSTER_NAME_KEYNAME=cluster-name
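Since the script uses the `${NAMESPACE:=...}` default-assignment form, the import can still target a differently named cluster namespace without editing the file, by exporting the variable before sourcing the script; a sketch:

```console
# Override the new rook-ceph default for a consumer cluster in another namespace.
export NAMESPACE=rook-ceph-external
. import-external-cluster.sh
```
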