Merge pull request rook#13362 from parth-gr/service-account-deafult
core: Set default service account on all Ceph daemons
travisn authored Feb 28, 2024
2 parents: e02c5b8 + f7a9d8f · commit: d77cee7
Showing 26 changed files with 93 additions and 42 deletions.
@@ -3,9 +3,7 @@ title: Authenticated Container Registries
---

If you want to use an image from authenticated docker registry (e.g. for image cache/mirror), you'll need to
add an `imagePullSecret` to all relevant service accounts. This way all pods created by the operator (for service account:
`rook-ceph-system`) or all new pods in the namespace (for service account: `default`) will have the `imagePullSecret` added
to their spec.
add an `imagePullSecret` to all relevant service accounts. See the next section for the required service accounts.

The whole process is described in the [official kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).

@@ -29,25 +27,22 @@ imagePullSecrets:
The service accounts are:
* `rook-ceph-system` (namespace: `rook-ceph`): Will affect all pods created by the rook operator in the `rook-ceph` namespace.
* `default` (namespace: `rook-ceph`): Will affect most pods in the `rook-ceph` namespace.
* `rook-ceph-default` (namespace: `rook-ceph`): Will affect most pods in the `rook-ceph` namespace.
* `rook-ceph-mgr` (namespace: `rook-ceph`): Will affect the MGR pods in the `rook-ceph` namespace.
* `rook-ceph-osd` (namespace: `rook-ceph`): Will affect the OSD pods in the `rook-ceph` namespace.
* `rook-ceph-rgw` (namespace: `rook-ceph`): Will affect the RGW pods in the `rook-ceph` namespace.

You can do it either via e.g. `kubectl -n <namespace> edit serviceaccount default` or by modifying the [`operator.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/operator.yaml)
and [`cluster.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster.yaml) before deploying them.

Since it's the same procedure for all service accounts, here is just one example:

```console
kubectl -n rook-ceph edit serviceaccount default
kubectl -n rook-ceph edit serviceaccount rook-ceph-default
```
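
As a non-interactive alternative (a sketch, assuming the registry secret already exists in the `rook-ceph` namespace and is named `my-registry-secret`), the service account can be patched directly:

```console
kubectl -n rook-ceph patch serviceaccount rook-ceph-default \
  --type merge -p '{"imagePullSecrets": [{"name": "my-registry-secret"}]}'
```

Either way, the edited service account should end up with the highlighted `imagePullSecrets` entry: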

```yaml hl_lines="9-10"
apiVersion: v1
kind: ServiceAccount
metadata:
name: default
name: rook-ceph-default
namespace: rook-ceph
secrets:
- name: default-token-12345
1 change: 1 addition & 0 deletions PendingReleaseNotes.md
@@ -11,3 +11,4 @@ read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https
## Features

- Kubernetes versions **v1.24** through **v1.29** are supported.
- Ceph daemon pods using the `default` service account now use a new `rook-ceph-default` service account.
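
A quick way to spot-check this after an upgrade (illustrative only; the `app=rook-ceph-mon` label selector and the `rook-ceph` namespace are assumptions based on the default examples) is to print the service account used by a daemon pod:

```console
kubectl -n rook-ceph get pods -l app=rook-ceph-mon \
  -o jsonpath='{.items[*].spec.serviceAccountName}'
```

Pods that previously ran under `default` should now report `rook-ceph-default`.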
2 changes: 1 addition & 1 deletion build/csv/csv-gen.sh
@@ -23,7 +23,7 @@ ASSEMBLE_FILE_OCP="../../deploy/olm/assemble/metadata-ocp.yaml"
#############

function generate_csv() {
kubectl kustomize ../../deploy/examples/ | "$operator_sdk" generate bundle --package="rook-ceph" --output-dir="../../build/csv/ceph/$PLATFORM" --extra-service-accounts=rook-ceph-system,rook-csi-rbd-provisioner-sa,rook-csi-rbd-plugin-sa,rook-csi-cephfs-provisioner-sa,rook-csi-nfs-provisioner-sa,rook-csi-nfs-plugin-sa,rook-csi-cephfs-plugin-sa,rook-ceph-system,rook-ceph-rgw,rook-ceph-purge-osd,rook-ceph-osd,rook-ceph-mgr,rook-ceph-cmd-reporter
kubectl kustomize ../../deploy/examples/ | "$operator_sdk" generate bundle --package="rook-ceph" --output-dir="../../build/csv/ceph/$PLATFORM" --extra-service-accounts=rook-ceph-default,rook-csi-rbd-provisioner-sa,rook-csi-rbd-plugin-sa,rook-csi-cephfs-provisioner-sa,rook-csi-nfs-provisioner-sa,rook-csi-nfs-plugin-sa,rook-csi-cephfs-plugin-sa,rook-ceph-system,rook-ceph-rgw,rook-ceph-purge-osd,rook-ceph-osd,rook-ceph-mgr,rook-ceph-cmd-reporter

# cleanup to get the expected state before merging the real data from assembles
"${YQ_CMD_DELETE[@]}" "$CSV_FILE_NAME" 'spec.icon[*]'
11 changes: 11 additions & 0 deletions deploy/charts/library/templates/_cluster-serviceaccount.tpl
@@ -57,4 +57,15 @@ metadata:
storage-backend: ceph
{{- include "library.rook-ceph.labels" . | nindent 4 }}
{{ include "library.imagePullSecrets" . }}
---
# Service account for other components
apiVersion: v1
kind: ServiceAccount
metadata:
name: rook-ceph-default
namespace: {{ .Release.Namespace }} # namespace:cluster
labels:
operator: rook
storage-backend: ceph
{{ include "library.imagePullSecrets" . }}
{{ end }}
@@ -41,4 +41,5 @@ users:
- system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-mgr
- system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-osd
- system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-rgw
- system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-default
{{- end }}
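
Rendered for a cluster in the `rook-ceph` namespace, the users list in the resulting SecurityContextConstraints would therefore include entries such as (illustrative rendering of the template above):

```yaml
users:
  - system:serviceaccount:rook-ceph:rook-ceph-mgr
  - system:serviceaccount:rook-ceph:rook-ceph-osd
  - system:serviceaccount:rook-ceph:rook-ceph-rgw
  - system:serviceaccount:rook-ceph:rook-ceph-default
```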
12 changes: 12 additions & 0 deletions deploy/examples/common-second-cluster.yaml
@@ -224,6 +224,18 @@ metadata:
name: rook-ceph-mgr
namespace: rook-ceph-secondary # namespace:cluster
---
# Service account for other components
apiVersion: v1
kind: ServiceAccount
metadata:
name: rook-ceph-default
namespace: rook-ceph-secondary # namespace:cluster
labels:
operator: rook
storage-backend: ceph
# imagePullSecrets:
# - name: my-registry-secret
---
apiVersion: v1
kind: ServiceAccount
metadata:
12 changes: 12 additions & 0 deletions deploy/examples/common.yaml
@@ -1154,6 +1154,18 @@ metadata:
# imagePullSecrets:
# - name: my-registry-secret
---
# Service account for other components
apiVersion: v1
kind: ServiceAccount
metadata:
name: rook-ceph-default
namespace: rook-ceph # namespace:cluster
labels:
operator: rook
storage-backend: ceph
# imagePullSecrets:
# - name: my-registry-secret
---
# Service account for Ceph mgrs
apiVersion: v1
kind: ServiceAccount
2 changes: 1 addition & 1 deletion pkg/apis/ceph.rook.io/v1/scc.go
@@ -69,7 +69,7 @@ func NewSecurityContextConstraints(name string, namespaces ...string) *secv1.Sec
for _, ns := range namespaces {
users = append(users, []string{
fmt.Sprintf("system:serviceaccount:%s:rook-ceph-system", ns),
fmt.Sprintf("system:serviceaccount:%s:default", ns),
fmt.Sprintf("system:serviceaccount:%s:rook-ceph-default", ns),
fmt.Sprintf("system:serviceaccount:%s:rook-ceph-mgr", ns),
fmt.Sprintf("system:serviceaccount:%s:rook-ceph-osd", ns),
fmt.Sprintf("system:serviceaccount:%s:rook-ceph-rgw", ns),
7 changes: 4 additions & 3 deletions pkg/operator/ceph/cluster/cleanup.go
@@ -158,9 +158,10 @@ func (c *ClusterController) cleanUpJobTemplateSpec(cluster *cephv1.CephCluster,
Containers: []v1.Container{
c.cleanUpJobContainer(cluster, monSecret, clusterFSID),
},
Volumes: volumes,
RestartPolicy: v1.RestartPolicyOnFailure,
PriorityClassName: cephv1.GetCleanupPriorityClassName(cluster.Spec.PriorityClassNames),
Volumes: volumes,
RestartPolicy: v1.RestartPolicyOnFailure,
PriorityClassName: cephv1.GetCleanupPriorityClassName(cluster.Spec.PriorityClassNames),
ServiceAccountName: k8sutil.DefaultServiceAccount,
},
}

7 changes: 4 additions & 3 deletions pkg/operator/ceph/cluster/mon/spec.go
@@ -186,9 +186,10 @@ func (c *Cluster) makeMonPod(monConfig *monConfig, canary bool) (*corev1.Pod, er
RestartPolicy: corev1.RestartPolicyAlways,
// we decide later whether to use a PVC volume or host volumes for mons, so only populate
// the base volumes at this point.
Volumes: controller.DaemonVolumesBase(monConfig.DataPathMap, keyringStoreName, c.spec.DataDirHostPath),
HostNetwork: monConfig.UseHostNetwork,
PriorityClassName: cephv1.GetMonPriorityClassName(c.spec.PriorityClassNames),
Volumes: controller.DaemonVolumesBase(monConfig.DataPathMap, keyringStoreName, c.spec.DataDirHostPath),
HostNetwork: monConfig.UseHostNetwork,
PriorityClassName: cephv1.GetMonPriorityClassName(c.spec.PriorityClassNames),
ServiceAccountName: k8sutil.DefaultServiceAccount,
}

// If the log collector is enabled we add the side-car container
1 change: 1 addition & 0 deletions pkg/operator/ceph/cluster/mon/spec_test.go
@@ -72,6 +72,7 @@ func testPodSpec(t *testing.T, monID string, pvc bool) {
d, err := c.makeDeployment(monConfig, false)
assert.NoError(t, err)
assert.NotNil(t, d)
assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName)

if pvc {
d.Spec.Template.Spec.Volumes = append(
11 changes: 6 additions & 5 deletions pkg/operator/ceph/cluster/nodedaemon/crash.go
@@ -116,11 +116,12 @@ func (r *ReconcileNode) createOrUpdateCephCrash(node corev1.Node, tolerations []
Containers: []corev1.Container{
getCrashDaemonContainer(cephCluster, *cephVersion),
},
Tolerations: tolerations,
RestartPolicy: corev1.RestartPolicyAlways,
HostNetwork: cephCluster.Spec.Network.IsHost(),
Volumes: volumes,
PriorityClassName: cephv1.GetCrashCollectorPriorityClassName(cephCluster.Spec.PriorityClassNames),
Tolerations: tolerations,
RestartPolicy: corev1.RestartPolicyAlways,
HostNetwork: cephCluster.Spec.Network.IsHost(),
Volumes: volumes,
PriorityClassName: cephv1.GetCrashCollectorPriorityClassName(cephCluster.Spec.PriorityClassNames),
ServiceAccountName: k8sutil.DefaultServiceAccount,
},
}

1 change: 1 addition & 0 deletions pkg/operator/ceph/cluster/nodedaemon/exporter.go
@@ -143,6 +143,7 @@ func (r *ReconcileNode) createOrUpdateCephExporter(node corev1.Node, tolerations
Volumes: volumes,
PriorityClassName: cephv1.GetCephExporterPriorityClassName(cephCluster.Spec.PriorityClassNames),
TerminationGracePeriodSeconds: &terminationGracePeriodSeconds,
ServiceAccountName: k8sutil.DefaultServiceAccount,
},
}
cephv1.GetCephExporterAnnotations(cephCluster.Spec.Annotations).ApplyToObjectMeta(&deploy.Spec.Template.ObjectMeta)
1 change: 1 addition & 0 deletions pkg/operator/ceph/cluster/nodedaemon/exporter_test.go
@@ -103,6 +103,7 @@ func TestCreateOrUpdateCephExporter(t *testing.T) {
assert.Equal(t, tolerations, podSpec.Spec.Tolerations)
assert.Equal(t, false, podSpec.Spec.HostNetwork)
assert.Equal(t, "", podSpec.Spec.PriorityClassName)
assert.Equal(t, k8sutil.DefaultServiceAccount, podSpec.Spec.ServiceAccountName)

assertCephExporterArgs(t, podSpec.Spec.Containers[0].Args, cephCluster.Spec.Network.DualStack || cephCluster.Spec.Network.IPFamily == "IPv6")

7 changes: 4 additions & 3 deletions pkg/operator/ceph/cluster/nodedaemon/pruner.go
@@ -107,9 +107,10 @@ func (r *ReconcileNode) createOrUpdateCephCron(cephCluster cephv1.CephCluster, c
Containers: []corev1.Container{
getCrashPruneContainer(cephCluster, *cephVersion),
},
RestartPolicy: corev1.RestartPolicyNever,
HostNetwork: cephCluster.Spec.Network.IsHost(),
Volumes: volumes,
RestartPolicy: corev1.RestartPolicyNever,
HostNetwork: cephCluster.Spec.Network.IsHost(),
Volumes: volumes,
ServiceAccountName: k8sutil.DefaultServiceAccount,
},
}

9 changes: 5 additions & 4 deletions pkg/operator/ceph/cluster/rbd/spec.go
@@ -39,10 +39,11 @@ func (r *ReconcileCephRBDMirror) makeDeployment(daemonConfig *daemonConfig, rbdM
Containers: []v1.Container{
r.makeMirroringDaemonContainer(daemonConfig, rbdMirror),
},
RestartPolicy: v1.RestartPolicyAlways,
Volumes: controller.DaemonVolumes(daemonConfig.DataPathMap, daemonConfig.ResourceName, r.cephClusterSpec.DataDirHostPath),
HostNetwork: r.cephClusterSpec.Network.IsHost(),
PriorityClassName: rbdMirror.Spec.PriorityClassName,
RestartPolicy: v1.RestartPolicyAlways,
Volumes: controller.DaemonVolumes(daemonConfig.DataPathMap, daemonConfig.ResourceName, r.cephClusterSpec.DataDirHostPath),
HostNetwork: r.cephClusterSpec.Network.IsHost(),
PriorityClassName: rbdMirror.Spec.PriorityClassName,
ServiceAccountName: k8sutil.DefaultServiceAccount,
},
}

3 changes: 2 additions & 1 deletion pkg/operator/ceph/cluster/rbd/spec_test.go
@@ -23,9 +23,9 @@ import (
"github.com/rook/rook/pkg/client/clientset/versioned/scheme"
cephclient "github.com/rook/rook/pkg/daemon/ceph/client"
"github.com/rook/rook/pkg/operator/ceph/config"

"github.com/rook/rook/pkg/operator/ceph/test"
cephver "github.com/rook/rook/pkg/operator/ceph/version"
"github.com/rook/rook/pkg/operator/k8sutil"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
@@ -91,6 +91,7 @@ func TestPodSpec(t *testing.T) {
assert.Equal(t, 5, len(d.Spec.Template.Spec.Volumes))
assert.Equal(t, 1, len(d.Spec.Template.Spec.Volumes[0].Projected.Sources))
assert.Equal(t, 5, len(d.Spec.Template.Spec.Containers[0].VolumeMounts))
assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName)

// Deployment should have Ceph labels
test.AssertLabelsContainCephRequirements(t, d.ObjectMeta.Labels,
9 changes: 5 additions & 4 deletions pkg/operator/ceph/file/mds/spec.go
@@ -61,10 +61,11 @@ func (c *Cluster) makeDeployment(mdsConfig *mdsConfig, fsNamespacedname types.Na
Containers: []v1.Container{
mdsContainer,
},
RestartPolicy: v1.RestartPolicyAlways,
Volumes: controller.DaemonVolumes(mdsConfig.DataPathMap, mdsConfig.ResourceName, c.clusterSpec.DataDirHostPath),
HostNetwork: c.clusterSpec.Network.IsHost(),
PriorityClassName: c.fs.Spec.MetadataServer.PriorityClassName,
RestartPolicy: v1.RestartPolicyAlways,
Volumes: controller.DaemonVolumes(mdsConfig.DataPathMap, mdsConfig.ResourceName, c.clusterSpec.DataDirHostPath),
HostNetwork: c.clusterSpec.Network.IsHost(),
PriorityClassName: c.fs.Spec.MetadataServer.PriorityClassName,
ServiceAccountName: k8sutil.DefaultServiceAccount,
},
}

2 changes: 1 addition & 1 deletion pkg/operator/ceph/file/mds/spec_test.go
@@ -28,7 +28,6 @@ import (
"github.com/rook/rook/pkg/clusterd"
cephclient "github.com/rook/rook/pkg/daemon/ceph/client"
cephver "github.com/rook/rook/pkg/operator/ceph/version"

testop "github.com/rook/rook/pkg/operator/test"
"github.com/stretchr/testify/assert"
apps "k8s.io/api/apps/v1"
@@ -104,6 +103,7 @@ func TestPodSpecs(t *testing.T) {

assert.NotNil(t, d)
assert.Equal(t, v1.RestartPolicyAlways, d.Spec.Template.Spec.RestartPolicy)
assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName)

// Deployment should have Ceph labels
test.AssertLabelsContainCephRequirements(t, d.ObjectMeta.Labels,
9 changes: 5 additions & 4 deletions pkg/operator/ceph/file/mirror/spec.go
@@ -42,10 +42,11 @@ func (r *ReconcileFilesystemMirror) makeDeployment(daemonConfig *daemonConfig, f
Containers: []v1.Container{
r.makeFsMirroringDaemonContainer(daemonConfig, fsMirror),
},
RestartPolicy: v1.RestartPolicyAlways,
Volumes: controller.DaemonVolumes(daemonConfig.DataPathMap, daemonConfig.ResourceName, r.cephClusterSpec.DataDirHostPath),
HostNetwork: r.cephClusterSpec.Network.IsHost(),
PriorityClassName: fsMirror.Spec.PriorityClassName,
RestartPolicy: v1.RestartPolicyAlways,
Volumes: controller.DaemonVolumes(daemonConfig.DataPathMap, daemonConfig.ResourceName, r.cephClusterSpec.DataDirHostPath),
HostNetwork: r.cephClusterSpec.Network.IsHost(),
PriorityClassName: fsMirror.Spec.PriorityClassName,
ServiceAccountName: k8sutil.DefaultServiceAccount,
},
}

2 changes: 2 additions & 0 deletions pkg/operator/ceph/file/mirror/spec_test.go
@@ -25,6 +25,7 @@ import (
"github.com/rook/rook/pkg/operator/ceph/config"
"github.com/rook/rook/pkg/operator/ceph/test"
cephver "github.com/rook/rook/pkg/operator/ceph/version"
"github.com/rook/rook/pkg/operator/k8sutil"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
@@ -88,6 +89,7 @@ func TestPodSpec(t *testing.T) {
assert.Equal(t, 5, len(d.Spec.Template.Spec.Volumes))
assert.Equal(t, 1, len(d.Spec.Template.Spec.Volumes[0].Projected.Sources))
assert.Equal(t, 5, len(d.Spec.Template.Spec.Containers[0].VolumeMounts))
assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName)

// Deployment should have Ceph labels
test.AssertLabelsContainCephRequirements(t, d.ObjectMeta.Labels,
3 changes: 2 additions & 1 deletion pkg/operator/ceph/nfs/spec.go
@@ -148,7 +148,8 @@ func (r *ReconcileCephNFS) makeDeployment(nfs *cephv1.CephNFS, cfg daemonConfig)
// for kerberos, nfs-ganesha uses the hostname via getaddrinfo() and uses that when
// connecting to the krb server. give all ganesha servers the same hostname so they can all
// use the same krb credentials to auth
Hostname: fmt.Sprintf("%s-%s", nfs.Namespace, nfs.Name),
Hostname: fmt.Sprintf("%s-%s", nfs.Namespace, nfs.Name),
ServiceAccountName: k8sutil.DefaultServiceAccount,
}
// Replace default unreachable node toleration
k8sutil.AddUnreachableNodeToleration(&podSpec)
2 changes: 2 additions & 0 deletions pkg/operator/ceph/nfs/spec_test.go
@@ -26,6 +26,7 @@ import (
cephclient "github.com/rook/rook/pkg/daemon/ceph/client"
"github.com/rook/rook/pkg/operator/ceph/config"
cephver "github.com/rook/rook/pkg/operator/ceph/version"
"github.com/rook/rook/pkg/operator/k8sutil"
optest "github.com/rook/rook/pkg/operator/test"
exectest "github.com/rook/rook/pkg/util/exec/test"
"github.com/stretchr/testify/assert"
@@ -145,6 +146,7 @@ func TestDeploymentSpec(t *testing.T) {
},
)
assert.Equal(t, "my-priority-class", d.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName)
})

t.Run("with sssd sidecar", func(t *testing.T) {
3 changes: 2 additions & 1 deletion pkg/operator/k8sutil/cmdreporter/cmdreporter.go
@@ -300,7 +300,8 @@ func (cr *cmdReporterCfg) initJobSpec() (*batch.Job, error) {
Containers: []v1.Container{
*cmdReporterContainer,
},
RestartPolicy: v1.RestartPolicyOnFailure,
RestartPolicy: v1.RestartPolicyOnFailure,
ServiceAccountName: k8sutil.DefaultServiceAccount,
}
copyBinsVol, _ := copyBinariesVolAndMount()
podSpec.Volumes = []v1.Volume{copyBinsVol}
3 changes: 2 additions & 1 deletion pkg/operator/k8sutil/k8sutil.go
@@ -54,10 +54,11 @@ const (
PodNamespaceEnvVar = "POD_NAMESPACE"
// NodeNameEnvVar is the env variable for getting the node via downward api
NodeNameEnvVar = "NODE_NAME"

// RookVersionLabelKey is the key used for reporting the Rook version which last created or
// modified a resource.
RookVersionLabelKey = "rook-version"
// DefaultServiceAccount is a service-account used for components that do not specify a dedicated service-account.
DefaultServiceAccount = "rook-ceph-default"
)

// GetK8SVersion gets the version of the running K8S cluster
1 change: 1 addition & 0 deletions tests/framework/installer/ceph_settings.go
@@ -99,6 +99,7 @@ func replaceNamespaces(name, manifest, operatorNamespace, clusterNamespace strin

// SCC namespaces for operator and Ceph daemons
manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-system # serviceaccount:namespace:operator", operatorNamespace+":rook-ceph-system")
manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-default # serviceaccount:namespace:cluster", clusterNamespace+":rook-ceph-default")
manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-mgr # serviceaccount:namespace:cluster", clusterNamespace+":rook-ceph-mgr")
manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-osd # serviceaccount:namespace:cluster", clusterNamespace+":rook-ceph-osd")
manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-rgw # serviceaccount:namespace:cluster", clusterNamespace+":rook-ceph-rgw")
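
For example, with `clusterNamespace` set to `test-cluster` (a namespace name chosen only for illustration), the new replacement turns the manifest entry `rook-ceph:rook-ceph-default # serviceaccount:namespace:cluster` into `test-cluster:rook-ceph-default`, mirroring how the other cluster service accounts are handled.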