diff --git a/.gitignore b/.gitignore
index ab71a3e4..e2883f72 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,6 +1,9 @@
 bin/
 testbin/
+catalog
+catalog.Dockerfile
+
 # Created by https://www.toptal.com/developers/gitignore/api/go,vim,visualstudiocode,git
 # Edit at https://www.toptal.com/developers/gitignore?templates=go,vim,visualstudiocode,git
@@ -74,6 +77,9 @@ tags
 # Local History for Visual Studio Code
 .history/
+### IntelliJ ###
+.idea
+
 # Built Visual Studio Code Extensions
 *.vsix
diff --git a/.golangci.yaml b/.golangci.yaml
index 4a0a9924..506d3dbf 100644
--- a/.golangci.yaml
+++ b/.golangci.yaml
@@ -7,12 +7,9 @@ run:
 linters:
   disable-all: true
   enable:
-    - deadcode
     - errcheck
     - gosimple
     - ineffassign
     - staticcheck
-    - structcheck
     - unused
-    - varcheck
     - revive
diff --git a/Dockerfile b/Dockerfile
index 024d341b..faa7bcdd 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -17,8 +17,6 @@ COPY controllers/ controllers/
 COPY config/ config/
 COPY pkg/ pkg/
 COPY service/ service/
-# Run tests and linting
-RUN make go-test
 # Build
 RUN make go-build
diff --git a/Makefile b/Makefile
index c5233a16..55ba6813 100644
--- a/Makefile
+++ b/Makefile
@@ -3,7 +3,6 @@ include hack/make-project-vars.mk
 include hack/make-tools.mk
 include hack/make-bundle-vars.mk
-
 # Setting SHELL to bash allows bash commands to be executed by recipes.
 # This is a requirement for 'setup-envtest.sh' in the test target.
 # Options are set to exit when a recipe line exits non-zero or a piped command fails.
@@ -55,15 +54,15 @@ vet: ## Run go vet against code.
 	go vet ./...

 lint: ## Run golangci-lint against code.
-	docker run --rm -v $(PROJECT_DIR):/app:Z -w /app $(GO_LINT_IMG) golangci-lint run ./...
+	$(IMAGE_BUILD_CMD) run --rm -v $(PROJECT_DIR):/app -w /app $(GO_LINT_IMG) golangci-lint run ./...
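As an aside on the Makefile comment above ("Options are set to exit when a recipe line exits non-zero or a piped command fails"): that behavior comes from running recipes with bash and `-o pipefail`. A minimal illustration of what the flag changes, outside any Makefile:

```shell
# Without pipefail, a pipeline's exit status is that of its LAST command,
# so a failing producer earlier in the pipe is silently ignored.
without=$(bash -c 'false | true; echo $?')

# With pipefail, the pipeline fails if ANY stage fails -- this is what the
# Makefile's SHELL setting relies on to abort a recipe early.
with=$(bash -c 'set -o pipefail; false | true; echo $?')

echo "without pipefail: $without, with pipefail: $with"
# -> without pipefail: 0, with pipefail: 1
```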
 godeps-update: ## Run go mod tidy & vendor
 	go mod tidy && go mod vendor

-test-setup: godeps-update generate fmt vet ## Run setup targets for tests
+test-setup: godeps-update generate fmt vet envtest ## Run setup targets for tests

 go-test: ## Run go test against code.
-	./hack/go-test.sh
+	KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(BIN_DIR) -p path)" go test -coverprofile cover.out `go list ./... | grep -v "e2e"`

 test: test-setup go-test ## Run go unit tests.
@@ -82,11 +81,11 @@ go-build: ## Run go build against code.
 run: manifests generate fmt vet ## Run a controller from your host.
 	go run ./main.go

-container-build: test-setup ## Build container image with the manager.
-	docker build -t ${IMG} .
+container-build: test ## Build container image with the manager.
+	$(IMAGE_BUILD_CMD) build --platform="linux/amd64" -t ${IMG} .

 container-push: ## Push container image with the manager.
-	docker push ${IMG}
+	$(IMAGE_BUILD_CMD) push ${IMG}

 ##@ Deployment
@@ -117,7 +116,8 @@ bundle: manifests kustomize operator-sdk yq ## Generate bundle manifests and met
 	rm -rf ./bundle
 	$(OPERATOR_SDK) generate kustomize manifests -q
 	cd config/manager && $(KUSTOMIZE) edit set image controller=$(IMG)
-	cd config/console && $(KUSTOMIZE) edit set image ocs-client-operator-console=$(OCS_CLIENT_CONSOLE_IMG)
+	cd config/console && $(KUSTOMIZE) edit set image ocs-client-operator-console=$(OCS_CLIENT_CONSOLE_IMG) && \
+	$(KUSTOMIZE) edit set nameprefix $(OPERATOR_NAMEPREFIX)
 	cd config/default && \
 	$(KUSTOMIZE) edit set image kube-rbac-proxy=$(RBAC_PROXY_IMG) && \
 	$(KUSTOMIZE) edit set namespace $(OPERATOR_NAMESPACE) && \
@@ -128,27 +128,27 @@
 	--patch '[{"op": "replace", "path": "/spec/replaces", "value": "$(REPLACES)"}]'
 	$(KUSTOMIZE) build $(MANIFEST_PATH) | sed "s|STATUS_REPORTER_IMAGE_VALUE|$(IMG)|g" | awk '{print}'| \
 	$(OPERATOR_SDK) generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS) --extra-service-accounts="$$($(KUSTOMIZE) build $(MANIFEST_PATH) | $(YQ) 'select(.kind == "ServiceAccount") | .metadata.name' -N | paste -sd "," -)"
-	sed -i "s|packageName:.*|packageName: ${CSI_ADDONS_PACKAGE_NAME}|g" "config/metadata/dependencies.yaml"
-	sed -i "s|version:.*|version: "${CSI_ADDONS_PACKAGE_VERSION}"|g" "config/metadata/dependencies.yaml"
+	yq -i '.dependencies[0].value.packageName = "'${CSI_ADDONS_PACKAGE_NAME}'"' config/metadata/dependencies.yaml
+	yq -i '.dependencies[0].value.version = "'${CSI_ADDONS_PACKAGE_VERSION}'"' config/metadata/dependencies.yaml
 	cp config/metadata/* bundle/metadata/
 	$(OPERATOR_SDK) bundle validate ./bundle

 .PHONY: bundle-build
 bundle-build: bundle ## Build the bundle image.
-	docker build -f bundle.Dockerfile -t $(BUNDLE_IMG) .
+	$(IMAGE_BUILD_CMD) build --platform="linux/amd64" -f bundle.Dockerfile -t $(BUNDLE_IMG) .

 .PHONY: bundle-push
 bundle-push: ## Push the bundle image.
-	docker push $(BUNDLE_IMG)
+	$(IMAGE_BUILD_CMD) push $(BUNDLE_IMG)

 # Build a catalog image by adding bundle images to an empty catalog using the operator package manager tool, 'opm'.
 # This recipe invokes 'opm' in 'semver' bundle add mode. For more information on add modes, see:
 # https://github.com/operator-framework/community-operators/blob/7f1438c/docs/packaging-operator.md#updating-your-existing-operator
 .PHONY: catalog-build
 catalog-build: opm ## Build a catalog image.
-	$(OPM) index add --permissive --container-tool docker --mode semver --tag $(CATALOG_IMG) --bundles $(BUNDLE_IMGS) $(FROM_INDEX_OPT)
+	./hack/build-catalog.sh

 # Push the catalog image.
 .PHONY: catalog-push
 catalog-push: ## Push a catalog image.
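A note on the `bundle` recipe above: the value passed to `--extra-service-accounts` is assembled by piping ServiceAccount names through `paste`. The last pipeline stage can be illustrated on its own (the names here are made up):

```shell
# paste -sd "," - joins all stdin lines into one comma-separated line,
# which is the format --extra-service-accounts expects.
printf 'sa-one\nsa-two\nsa-three\n' | paste -sd "," -
# -> sa-one,sa-two,sa-three
```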
-	docker push $(CATALOG_IMG)
+	$(IMAGE_BUILD_CMD) push $(CATALOG_IMG)
diff --git a/PROJECT b/PROJECT
index 79f7b584..85efe3a6 100644
--- a/PROJECT
+++ b/PROJECT
@@ -20,14 +20,6 @@ resources:
     kind: StorageClient
     path: github.com/red-hat-storage/ocs-client-operator/api/v1alpha1
     version: v1alpha1
-- api:
-    crdVersion: v1
-    namespaced: true
-  domain: openshift.io
-  group: ocs
-  kind: StorageClassClaim
-  path: github.com/red-hat-storage/ocs-client-operator/api/v1alpha1
-  version: v1alpha1
 - api:
     crdVersion: v1
     namespaced: true
diff --git a/api/v1alpha1/storageclaim_types.go b/api/v1alpha1/storageclaim_types.go
index 2d52d4d4..da6687d8 100644
--- a/api/v1alpha1/storageclaim_types.go
+++ b/api/v1alpha1/storageclaim_types.go
@@ -44,18 +44,13 @@ type StorageClaimStatus struct {
 	Phase storageClaimState `json:"phase,omitempty"`
 }

-type StorageClientNamespacedName struct {
-	Name      string `json:"name"`
-	Namespace string `json:"namespace"`
-}
-
 // StorageClaimSpec defines the desired state of StorageClaim
 type StorageClaimSpec struct {
-	//+kubebuilder:validation:Enum=block;sharedfile
-	Type             string                       `json:"type"`
-	EncryptionMethod string                       `json:"encryptionMethod,omitempty"`
-	StorageProfile   string                       `json:"storageProfile,omitempty"`
-	StorageClient    *StorageClientNamespacedName `json:"storageClient"`
+	//+kubebuilder:validation:XValidation:rule="self.lowerAscii()=='block'||self.lowerAscii()=='sharedfile'",message="value should be either 'sharedfile' or 'block'"
+	Type             string `json:"type"`
+	EncryptionMethod string `json:"encryptionMethod,omitempty"`
+	StorageProfile   string `json:"storageProfile,omitempty"`
+	StorageClient    string `json:"storageClient"`
 }

 //+kubebuilder:object:root=true
@@ -63,8 +58,7 @@ type StorageClaimSpec struct {
 //+kubebuilder:resource:scope=Cluster
 //+kubebuilder:printcolumn:name="StorageType",type="string",JSONPath=".spec.type"
 //+kubebuilder:printcolumn:name="StorageProfile",type="string",JSONPath=".spec.storageProfile"
-//+kubebuilder:printcolumn:name="StorageClientName",type="string",JSONPath=".spec.storageClient.name"
-//+kubebuilder:printcolumn:name="StorageClientNamespace",type="string",JSONPath=".spec.storageClient.namespace"
+//+kubebuilder:printcolumn:name="StorageClientName",type="string",JSONPath=".spec.storageClient"
 //+kubebuilder:printcolumn:name="Phase",type="string",JSONPath=".status.phase"

 // StorageClaim is the Schema for the storageclaims API
diff --git a/api/v1alpha1/storageclassclaim_types.go b/api/v1alpha1/storageclassclaim_types.go
deleted file mode 100644
index ba852852..00000000
--- a/api/v1alpha1/storageclassclaim_types.go
+++ /dev/null
@@ -1,89 +0,0 @@
-/*
-Copyright 2022 Red Hat, Inc.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package v1alpha1
-
-import (
-	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-)
-
-type storageClassClaimState string
-
-const (
-	// StorageClassClaimInitializing represents Initializing state of StorageClassClaim
-	StorageClassClaimInitializing storageClassClaimState = "Initializing"
-	// StorageClassClaimValidating represents Validating state of StorageClassClaim
-	StorageClassClaimValidating storageClassClaimState = "Validating"
-	// StorageClassClaimFailed represents Failed state of StorageClassClaim
-	StorageClassClaimFailed storageClassClaimState = "Failed"
-	// StorageClassClaimCreating represents Configuring state of StorageClassClaim
-	StorageClassClaimCreating storageClassClaimState = "Creating"
-	// StorageClassClaimConfiguring represents Configuring state of StorageClassClaim
-	StorageClassClaimConfiguring storageClassClaimState = "Configuring"
-	// StorageClassClaimReady represents Ready state of StorageClassClaim
-	StorageClassClaimReady storageClassClaimState = "Ready"
-	// StorageClassClaimDeleting represents Deleting state of StorageClassClaim
-	StorageClassClaimDeleting storageClassClaimState = "Deleting"
-)
-
-// StorageClassClaimStatus defines the observed state of StorageClassClaim
-type StorageClassClaimStatus struct {
-	Phase       storageClassClaimState `json:"phase,omitempty"`
-	SecretNames []string               `json:"secretNames,omitempty"`
-}
-
-// StorageClassClaimSpec defines the desired state of StorageClassClaim
-type StorageClassClaimSpec struct {
-	//+kubebuilder:validation:Enum=blockpool;sharedfilesystem
-	Type             string                       `json:"type"`
-	EncryptionMethod string                       `json:"encryptionMethod,omitempty"`
-	StorageProfile   string                       `json:"storageProfile,omitempty"`
-	StorageClient    *StorageClientNamespacedName `json:"storageClient"`
-}
-
-//+kubebuilder:object:root=true
-//+kubebuilder:subresource:status
-//+kubebuilder:resource:scope=Cluster
-//+kubebuilder:printcolumn:name="StorageType",type="string",JSONPath=".spec.type"
-//+kubebuilder:printcolumn:name="StorageProfile",type="string",JSONPath=".spec.storageProfile"
-//+kubebuilder:printcolumn:name="StorageClientName",type="string",JSONPath=".spec.storageClient.name"
-//+kubebuilder:printcolumn:name="StorageClientNamespace",type="string",JSONPath=".spec.storageClient.namespace"
-//+kubebuilder:printcolumn:name="Phase",type="string",JSONPath=".status.phase"
-
-// StorageClassClaim is the Schema for the storageclassclaims API
-//+kubebuilder:deprecatedversion:warning="StorageClassClaim API is deprecated and will be removed in future version, please use StorageClaim API instead."
-type StorageClassClaim struct {
-	metav1.TypeMeta   `json:",inline"`
-	metav1.ObjectMeta `json:"metadata,omitempty"`
-
-	//+kubebuilder:validation:Required
-	//+kubebuilder:validation:XValidation:rule="oldSelf == self",message="spec is immutable"
-	Spec   StorageClassClaimSpec   `json:"spec"`
-	Status StorageClassClaimStatus `json:"status,omitempty"`
-}
-
-//+kubebuilder:object:root=true
-
-// StorageClassClaimList contains a list of StorageClassClaim
-type StorageClassClaimList struct {
-	metav1.TypeMeta `json:",inline"`
-	metav1.ListMeta `json:"metadata,omitempty"`
-	Items           []StorageClassClaim `json:"items"`
-}
-
-func init() {
-	SchemeBuilder.Register(&StorageClassClaim{}, &StorageClassClaimList{})
-}
diff --git a/api/v1alpha1/storageclient_types.go b/api/v1alpha1/storageclient_types.go
index 28147c76..6ca4d7c8 100644
--- a/api/v1alpha1/storageclient_types.go
+++ b/api/v1alpha1/storageclient_types.go
@@ -56,6 +56,7 @@ type StorageClientStatus struct {
 //+kubebuilder:object:root=true
 //+kubebuilder:subresource:status
+//+kubebuilder:resource:scope=Cluster
 //+kubebuilder:printcolumn:name="Phase",type="string",JSONPath=".status.phase"
 //+kubebuilder:printcolumn:name="consumer",type="string",JSONPath=".status.id"
diff --git a/api/v1alpha1/zz_generated.deepcopy.go b/api/v1alpha1/zz_generated.deepcopy.go
index c5b6e024..ae9507fe 100644
--- a/api/v1alpha1/zz_generated.deepcopy.go
+++ b/api/v1alpha1/zz_generated.deepcopy.go
@@ -30,7 +30,7 @@ func (in *StorageClaim) DeepCopyInto(out *StorageClaim) {
 	*out = *in
 	out.TypeMeta = in.TypeMeta
 	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
-	in.Spec.DeepCopyInto(&out.Spec)
+	out.Spec = in.Spec
 	out.Status = in.Status
 }

@@ -87,11 +87,6 @@ func (in *StorageClaimList) DeepCopyObject() runtime.Object {
 // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
 func (in *StorageClaimSpec) DeepCopyInto(out *StorageClaimSpec) {
 	*out = *in
-	if in.StorageClient != nil {
-		in, out := &in.StorageClient, &out.StorageClient
-		*out = new(StorageClientNamespacedName)
-		**out = **in
-	}
 }

 // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StorageClaimSpec.
@@ -119,105 +114,6 @@ func (in *StorageClaimStatus) DeepCopy() *StorageClaimStatus {
 	return out
 }

-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *StorageClassClaim) DeepCopyInto(out *StorageClassClaim) {
-	*out = *in
-	out.TypeMeta = in.TypeMeta
-	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
-	in.Spec.DeepCopyInto(&out.Spec)
-	in.Status.DeepCopyInto(&out.Status)
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StorageClassClaim.
-func (in *StorageClassClaim) DeepCopy() *StorageClassClaim {
-	if in == nil {
-		return nil
-	}
-	out := new(StorageClassClaim)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
-func (in *StorageClassClaim) DeepCopyObject() runtime.Object {
-	if c := in.DeepCopy(); c != nil {
-		return c
-	}
-	return nil
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *StorageClassClaimList) DeepCopyInto(out *StorageClassClaimList) {
-	*out = *in
-	out.TypeMeta = in.TypeMeta
-	in.ListMeta.DeepCopyInto(&out.ListMeta)
-	if in.Items != nil {
-		in, out := &in.Items, &out.Items
-		*out = make([]StorageClassClaim, len(*in))
-		for i := range *in {
-			(*in)[i].DeepCopyInto(&(*out)[i])
-		}
-	}
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StorageClassClaimList.
-func (in *StorageClassClaimList) DeepCopy() *StorageClassClaimList {
-	if in == nil {
-		return nil
-	}
-	out := new(StorageClassClaimList)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
-func (in *StorageClassClaimList) DeepCopyObject() runtime.Object {
-	if c := in.DeepCopy(); c != nil {
-		return c
-	}
-	return nil
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *StorageClassClaimSpec) DeepCopyInto(out *StorageClassClaimSpec) {
-	*out = *in
-	if in.StorageClient != nil {
-		in, out := &in.StorageClient, &out.StorageClient
-		*out = new(StorageClientNamespacedName)
-		**out = **in
-	}
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StorageClassClaimSpec.
-func (in *StorageClassClaimSpec) DeepCopy() *StorageClassClaimSpec {
-	if in == nil {
-		return nil
-	}
-	out := new(StorageClassClaimSpec)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *StorageClassClaimStatus) DeepCopyInto(out *StorageClassClaimStatus) {
-	*out = *in
-	if in.SecretNames != nil {
-		in, out := &in.SecretNames, &out.SecretNames
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StorageClassClaimStatus.
-func (in *StorageClassClaimStatus) DeepCopy() *StorageClassClaimStatus {
-	if in == nil {
-		return nil
-	}
-	out := new(StorageClassClaimStatus)
-	in.DeepCopyInto(out)
-	return out
-}
-
 // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
 func (in *StorageClient) DeepCopyInto(out *StorageClient) {
 	*out = *in
@@ -277,21 +173,6 @@ func (in *StorageClientList) DeepCopyObject() runtime.Object {
 	return nil
 }

-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *StorageClientNamespacedName) DeepCopyInto(out *StorageClientNamespacedName) {
-	*out = *in
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StorageClientNamespacedName.
-func (in *StorageClientNamespacedName) DeepCopy() *StorageClientNamespacedName {
-	if in == nil {
-		return nil
-	}
-	out := new(StorageClientNamespacedName)
-	in.DeepCopyInto(out)
-	return out
-}
-
 // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
 func (in *StorageClientSpec) DeepCopyInto(out *StorageClientSpec) {
 	*out = *in
diff --git a/bundle/manifests/ocs-client-operator-config_v1_configmap.yaml b/bundle/manifests/ocs-client-operator-config_v1_configmap.yaml
new file mode 100644
index 00000000..360b979a
--- /dev/null
+++ b/bundle/manifests/ocs-client-operator-config_v1_configmap.yaml
@@ -0,0 +1,4 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: ocs-client-operator-config
diff --git a/bundle/manifests/ocs-client-operator-csi-images_v1_configmap.yaml b/bundle/manifests/ocs-client-operator-csi-images_v1_configmap.yaml
index 3bcfe663..450c34b3 100644
--- a/bundle/manifests/ocs-client-operator-csi-images_v1_configmap.yaml
+++ b/bundle/manifests/ocs-client-operator-csi-images_v1_configmap.yaml
@@ -2,25 +2,35 @@ apiVersion: v1
 data:
   csi-images.yaml: |
     ---
-    - version: v4.11
+    - version: v4.14
       containerImages:
-        provisionerImageURL: "registry.k8s.io/sig-storage/csi-provisioner:v3.3.0"
-        attacherImageURL: "registry.k8s.io/sig-storage/csi-attacher:v4.0.0"
-        resizerImageURL: "registry.k8s.io/sig-storage/csi-resizer:v1.6.0"
-        snapshotterImageURL: "registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0"
-        driverRegistrarImageURL: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1"
-        cephCSIImageURL: "quay.io/cephcsi/cephcsi:v3.7.2"
-        csiaddonsImageURL: "quay.io/csiaddons/k8s-sidecar:v0.5.0"
+        provisionerImageURL: "registry.k8s.io/sig-storage/csi-provisioner:v4.0.0"
+        attacherImageURL: "registry.k8s.io/sig-storage/csi-attacher:v4.5.0"
+        resizerImageURL: "registry.k8s.io/sig-storage/csi-resizer:v1.10.0"
+        snapshotterImageURL: "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1"
+        driverRegistrarImageURL: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0"
+        cephCSIImageURL: "quay.io/cephcsi/cephcsi:v3.10.2"
+        csiaddonsImageURL: "quay.io/csiaddons/k8s-sidecar:v0.8.0"

-    - version: v4.12
+    - version: v4.15
       containerImages:
-        provisionerImageURL: "registry.k8s.io/sig-storage/csi-provisioner:v3.3.0"
-        attacherImageURL: "registry.k8s.io/sig-storage/csi-attacher:v4.0.0"
-        resizerImageURL: "registry.k8s.io/sig-storage/csi-resizer:v1.6.0"
-        snapshotterImageURL: "registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0"
-        driverRegistrarImageURL: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1"
-        cephCSIImageURL: "quay.io/cephcsi/cephcsi:v3.7.2"
-        csiaddonsImageURL: "quay.io/csiaddons/k8s-sidecar:v0.5.0"
+        provisionerImageURL: "registry.k8s.io/sig-storage/csi-provisioner:v4.0.0"
+        attacherImageURL: "registry.k8s.io/sig-storage/csi-attacher:v4.5.0"
+        resizerImageURL: "registry.k8s.io/sig-storage/csi-resizer:v1.10.0"
+        snapshotterImageURL: "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1"
+        driverRegistrarImageURL: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0"
+        cephCSIImageURL: "quay.io/cephcsi/cephcsi:v3.10.2"
+        csiaddonsImageURL: "quay.io/csiaddons/k8s-sidecar:v0.8.0"
+
+    - version: v4.16
+      containerImages:
+        provisionerImageURL: "registry.k8s.io/sig-storage/csi-provisioner:v4.0.0"
+        attacherImageURL: "registry.k8s.io/sig-storage/csi-attacher:v4.5.0"
+        resizerImageURL: "registry.k8s.io/sig-storage/csi-resizer:v1.10.0"
+        snapshotterImageURL: "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1"
+        driverRegistrarImageURL: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0"
+        cephCSIImageURL: "quay.io/cephcsi/cephcsi:v3.10.2"
+        csiaddonsImageURL: "quay.io/csiaddons/k8s-sidecar:v0.8.0"
 kind: ConfigMap
 metadata:
   name: ocs-client-operator-csi-images
diff --git a/bundle/manifests/ocs-client-operator-webhook-server_v1_service.yaml b/bundle/manifests/ocs-client-operator-webhook-server_v1_service.yaml
new file mode 100644
index 00000000..168e6fdd
--- /dev/null
+++ b/bundle/manifests/ocs-client-operator-webhook-server_v1_service.yaml
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Service
+metadata:
+  annotations:
+    service.beta.openshift.io/serving-cert-secret-name: webhook-cert-secret
+  creationTimestamp: null
+  name: ocs-client-operator-webhook-server
+spec:
+  ports:
+  - name: https
+    port: 443
+    protocol: TCP
+    targetPort: 7443
+  selector:
+    app: ocs-client-operator
+  type: ClusterIP
+status:
+  loadBalancer: {}
diff --git a/bundle/manifests/ocs-client-operator.clusterserviceversion.yaml b/bundle/manifests/ocs-client-operator.clusterserviceversion.yaml
index 84a0ceb0..61c6f840 100644
--- a/bundle/manifests/ocs-client-operator.clusterserviceversion.yaml
+++ b/bundle/manifests/ocs-client-operator.clusterserviceversion.yaml
@@ -8,16 +8,16 @@ metadata:
     olm.skipRange: ""
     operators.operatorframework.io/builder: operator-sdk-v1.19.0+git
     operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
-  name: ocs-client-operator.v4.12.0
+  name: ocs-client-operator.v4.16.0
   namespace: placeholder
 spec:
   apiservicedefinitions: {}
   customresourcedefinitions:
     owned:
-    - description: StorageClassClaim is the Schema for the storageclassclaims API
-      displayName: Storage Class Claim
-      kind: StorageClassClaim
-      name: storageclassclaims.ocs.openshift.io
+    - description: StorageClaim is the Schema for the storageclaims API
+      displayName: Storage Claim
+      kind: StorageClaim
+      name: storageclaims.ocs.openshift.io
       version: v1alpha1
     - description: StorageClient is the Schema for the storageclients API
       displayName: Storage Client
@@ -44,6 +44,12 @@ spec:
        - list
        - update
        - watch
+      - apiGroups:
+        - ""
+        resources:
+        - configmaps/finalizers
+        verbs:
+        - update
      - apiGroups:
        - ""
        resources:
@@ -67,6 +73,25 @@
        - patch
        - update
        - watch
+      - apiGroups:
+        - admissionregistration.k8s.io
+        resources:
+        - validatingwebhookconfigurations
+        verbs:
+        - create
+        - delete
+        - get
+        - list
+        - update
+        - watch
+      - apiGroups:
+        - apiextensions.k8s.io
+        resources:
+        - customresourcedefinitions
+        verbs:
+        - get
+        - list
+        - watch
      - apiGroups:
        - apps
        resources:
@@ -119,27 +144,9 @@
        resources:
        - clusterversions
        verbs:
-        - create
-        - delete
        - get
        - list
-        - patch
-        - update
        - watch
-      - apiGroups:
-        - config.openshift.io
-        resources:
-        - clusterversions/finalizers
-        verbs:
-        - update
-      - apiGroups:
-        - config.openshift.io
-        resources:
-        - clusterversions/status
-        verbs:
-        - get
-        - patch
-        - update
      - apiGroups:
        - console.openshift.io
        resources:
@@ -167,7 +174,7 @@
      - apiGroups:
        - ocs.openshift.io
        resources:
-        - storageclassclaims
+        - storageclaims
        verbs:
        - create
        - delete
@@ -179,13 +186,13 @@
      - apiGroups:
        - ocs.openshift.io
        resources:
-        - storageclassclaims/finalizers
+        - storageclaims/finalizers
        verbs:
        - update
      - apiGroups:
        - ocs.openshift.io
        resources:
-        - storageclassclaims/status
+        - storageclaims/status
        verbs:
        - get
        - patch
@@ -224,12 +231,22 @@
        - get
        - list
        - watch
+      - apiGroups:
+        - operators.coreos.com
+        resources:
+        - subscriptions
+        verbs:
+        - get
+        - list
+        - update
+        - watch
      - apiGroups:
        - security.openshift.io
        resources:
        - securitycontextconstraints
        verbs:
        - create
+        - delete
        - get
        - list
        - patch
@@ -245,6 +262,14 @@
        - get
        - list
        - watch
+      - apiGroups:
+        - snapshot.storage.k8s.io
+        resources:
+        - volumesnapshotcontents
+        verbs:
+        - get
+        - list
+        - watch
      - apiGroups:
        - storage.k8s.io
        resources:
@@ -598,6 +623,7 @@ verbs:
        - get
        - list
+        - patch
      - apiGroups:
        - ""
        resources:
@@ -617,74 +643,14 @@
        serviceAccountName: ocs-client-operator-status-reporter
      deployments:
      - label:
-          app.kubernetes.io/name: ocs-client-operator-console
-        name: ocs-client-operator-console
-        spec:
-          selector:
-            matchLabels:
-              app.kubernetes.io/name: ocs-client-operator-console
-          strategy: {}
-          template:
-            metadata:
-              labels:
-                app.kubernetes.io/name: ocs-client-operator-console
-            spec:
-              containers:
-              - image: quay.io/ocs-dev/ocs-client-console:latest
-                livenessProbe:
-                  httpGet:
-                    path: /plugin-manifest.json
-                    port: 9001
-                    scheme: HTTPS
-                  initialDelaySeconds: 1000
-                  periodSeconds: 60
-                name: ocs-client-operator-console
-                ports:
-                - containerPort: 9001
-                  protocol: TCP
-                resources:
-                  limits:
-                    cpu: 100m
-                    memory: 512Mi
-                securityContext:
-                  allowPrivilegeEscalation: false
-                  capabilities:
-                    drop:
-                    - ALL
-                  readOnlyRootFilesystem: true
-                  seccompProfile:
-                    type: RuntimeDefault
-                volumeMounts:
-                - mountPath: /var/serving-cert
-                  name: ocs-client-operator-console-serving-cert
-                  readOnly: true
-                - mountPath: /etc/nginx/nginx.conf
-                  name: ocs-client-operator-console-nginx-conf
-                  subPath: nginx.conf
-                - mountPath: /var/log/nginx
-                  name: ocs-client-operator-console-nginx-log
-                - mountPath: /var/lib/nginx/tmp
-                  name: ocs-client-operator-console-nginx-tmp
-              securityContext:
-                runAsNonRoot: true
-              volumes:
-              - name: ocs-client-operator-console-serving-cert
-                secret:
-                  secretName: ocs-client-operator-console-serving-cert
-              - configMap:
-                  name: ocs-client-operator-console-nginx-conf
-                name: ocs-client-operator-console-nginx-conf
-              - emptyDir: {}
-                name: ocs-client-operator-console-nginx-log
-              - emptyDir: {}
-                name: ocs-client-operator-console-nginx-tmp
-      - label:
+          app: ocs-client-operator
          control-plane: controller-manager
        name: ocs-client-operator-controller-manager
        spec:
          replicas: 1
          selector:
            matchLabels:
+              app: ocs-client-operator
              control-plane: controller-manager
          strategy: {}
          template:
@@ -692,6 +658,7 @@
            annotations:
              kubectl.kubernetes.io/default-container: manager
            labels:
+              app: ocs-client-operator
              control-plane: controller-manager
          spec:
            containers:
@@ -700,7 +667,7 @@
              - --upstream=http://127.0.0.1:8080/
              - --logtostderr=true
              - --v=0
-              image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
+              image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.9.0
              name: kube-rbac-proxy
              ports:
              - containerPort: 8443
@@ -757,6 +724,8 @@
              volumeMounts:
              - mountPath: /opt/config
                name: csi-images
+              - mountPath: /etc/tls/private
+                name: webhook-cert-secret
            securityContext:
              runAsNonRoot: true
            serviceAccountName: ocs-client-operator-controller-manager
@@ -765,9 +734,12 @@
            - configMap:
                name: ocs-client-operator-csi-images
              name: csi-images
+            - name: webhook-cert-secret
+              secret:
+                secretName: webhook-cert-secret
      - label:
          app.kubernetes.io/name: ocs-client-operator-console
-        name: console
+        name: ocs-client-operator-console
        spec:
          selector:
            matchLabels:
@@ -934,4 +906,4 @@
   maturity: alpha
   provider:
     name: Red Hat
-  version: 4.12.0
+  version: 4.16.0
diff --git a/bundle/manifests/ocs.openshift.io_storageclassclaims.yaml b/bundle/manifests/ocs.openshift.io_storageclaims.yaml
similarity index 63%
rename from bundle/manifests/ocs.openshift.io_storageclassclaims.yaml
rename to bundle/manifests/ocs.openshift.io_storageclaims.yaml
index 82ef4ce5..23ed6f90 100644
--- a/bundle/manifests/ocs.openshift.io_storageclassclaims.yaml
+++ b/bundle/manifests/ocs.openshift.io_storageclaims.yaml
@@ -2,16 +2,16 @@ apiVersion: apiextensions.k8s.io/v1
 kind: CustomResourceDefinition
 metadata:
   annotations:
-    controller-gen.kubebuilder.io/version: v0.4.1
+    controller-gen.kubebuilder.io/version: v0.9.2
   creationTimestamp: null
-  name: storageclassclaims.ocs.openshift.io
+  name: storageclaims.ocs.openshift.io
 spec:
   group: ocs.openshift.io
   names:
-    kind: StorageClassClaim
-    listKind: StorageClassClaimList
-    plural: storageclassclaims
-    singular: storageclassclaim
+    kind: StorageClaim
+    listKind: StorageClaimList
+    plural: storageclaims
+    singular: storageclaim
   scope: Cluster
   versions:
   - additionalPrinterColumns:
@@ -21,19 +21,16 @@ spec:
     - jsonPath: .spec.storageProfile
       name: StorageProfile
       type: string
-    - jsonPath: .spec.storageClient.name
+    - jsonPath: .spec.storageClient
       name: StorageClientName
       type: string
-    - jsonPath: .spec.storageClient.namespace
-      name: StorageClientNamespace
-      type: string
     - jsonPath: .status.phase
       name: Phase
       type: string
     name: v1alpha1
     schema:
       openAPIV3Schema:
-        description: StorageClassClaim is the Schema for the storageclassclaims API
+        description: StorageClaim is the Schema for the storageclaims API
         properties:
           apiVersion:
             description: 'APIVersion defines the versioned schema of this representation
               of an object. Servers should convert recognized schemas to the latest
               internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
             type: string
           kind:
             description: 'Kind is a string value representing the REST resource this
               object represents. Servers may infer this from the endpoint the client
               submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
             type: string
@@ -48,40 +45,31 @@
           metadata:
             type: object
           spec:
-            description: StorageClassClaimSpec defines the desired state of StorageClassClaim
+            description: StorageClaimSpec defines the desired state of StorageClaim
             properties:
               encryptionMethod:
                 type: string
               storageClient:
-                properties:
-                  name:
-                    type: string
-                  namespace:
-                    type: string
-                required:
-                - name
-                - namespace
-                type: object
+                type: string
               storageProfile:
                 type: string
               type:
-                enum:
-                - blockpool
-                - sharedfilesystem
                 type: string
+                x-kubernetes-validations:
+                - message: value should be either 'sharedfile' or 'block'
+                  rule: self.lowerAscii()=='block'||self.lowerAscii()=='sharedfile'
             required:
             - storageClient
            - type
             type: object
+            x-kubernetes-validations:
+            - message: spec is immutable
+              rule: oldSelf == self
           status:
-            description: StorageClassClaimStatus defines the observed state of StorageClassClaim
+            description: StorageClaimStatus defines the observed state of StorageClaim
             properties:
               phase:
                 type: string
-              secretNames:
-                items:
-                  type: string
-                type: array
             type: object
         type: object
     served: true
@@ -92,5 +80,5 @@ status:
   acceptedNames:
     kind: ""
     plural: ""
-  conditions: []
-  storedVersions: []
+  conditions: null
+  storedVersions: null
diff --git a/bundle/manifests/ocs.openshift.io_storageclients.yaml b/bundle/manifests/ocs.openshift.io_storageclients.yaml
index 9b6527b7..95e22b1d 100644
--- a/bundle/manifests/ocs.openshift.io_storageclients.yaml
+++ b/bundle/manifests/ocs.openshift.io_storageclients.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
 kind: CustomResourceDefinition
 metadata:
   annotations:
-    controller-gen.kubebuilder.io/version: v0.4.1
+    controller-gen.kubebuilder.io/version: v0.9.2
   creationTimestamp: null
   name: storageclients.ocs.openshift.io
 spec:
@@ -12,7 +12,7 @@ spec:
   listKind: StorageClientList
   plural: storageclients
   singular: storageclient
-  scope: Namespaced
+  scope: Cluster
   versions:
   - additionalPrinterColumns:
     - jsonPath: .status.phase
@@ -72,5 +72,5 @@ status:
   acceptedNames:
     kind: ""
     plural: ""
-  conditions: []
-  storedVersions: []
+  conditions: null
+  storedVersions: null
diff --git a/bundle/metadata/dependencies.yaml b/bundle/metadata/dependencies.yaml
index dcfeb21f..3a99feee 100644
--- a/bundle/metadata/dependencies.yaml
+++ b/bundle/metadata/dependencies.yaml
@@ -1,5 +1,5 @@
 dependencies:
-- type: olm.package
-  value:
-    packageName: csi-addons
-    version: 0.5.0
+  - type: olm.package
+    value:
+      packageName: csi-addons
+      version: 0.8.0
diff --git a/config/console/kustomization.yaml b/config/console/kustomization.yaml
index d8403715..a70695ea 100644
--- a/config/console/kustomization.yaml
+++ b/config/console/kustomization.yaml
@@ -12,3 +12,4 @@ images:
 - name: ocs-client-operator-console
   newName: quay.io/ocs-dev/ocs-client-console
   newTag: latest
+namePrefix: ocs-client-operator-
diff --git a/config/crd/bases/ocs.openshift.io_storageclaims.yaml b/config/crd/bases/ocs.openshift.io_storageclaims.yaml
index a10dc976..da254473 100644
--- a/config/crd/bases/ocs.openshift.io_storageclaims.yaml
+++ b/config/crd/bases/ocs.openshift.io_storageclaims.yaml
@@ -22,12 +22,9 @@ spec:
     - jsonPath: .spec.storageProfile
       name: StorageProfile
       type: string
-    - jsonPath: .spec.storageClient.name
+    - jsonPath: .spec.storageClient
       name: StorageClientName
       type: string
-    - jsonPath: .spec.storageClient.namespace
-      name: StorageClientNamespace
-      type: string
     - jsonPath: .status.phase
       name: Phase
       type: string
@@ -54,22 +51,14 @@ spec:
               encryptionMethod:
                 type: string
               storageClient:
-                properties:
-                  name:
-                    type: string
-                  namespace:
-                    type: string
-                required:
-                - name
-                - namespace
-                type: object
+                type: string
               storageProfile:
                 type: string
               type:
-                enum:
-                - block
-                - sharedfile
                 type: string
+                x-kubernetes-validations:
+                - message: value should be either 'sharedfile' or 'block'
+                  rule: self.lowerAscii()=='block'||self.lowerAscii()=='sharedfile'
             required:
             - storageClient
             - type
diff --git a/config/crd/bases/ocs.openshift.io_storageclassclaims.yaml b/config/crd/bases/ocs.openshift.io_storageclassclaims.yaml
deleted file mode 100644
index 6c4dff38..00000000
--- a/config/crd/bases/ocs.openshift.io_storageclassclaims.yaml
+++ /dev/null
@@ -1,99 +0,0 @@
----
-apiVersion: apiextensions.k8s.io/v1
-kind: CustomResourceDefinition
-metadata:
-  annotations:
-    controller-gen.kubebuilder.io/version: v0.9.2
-  creationTimestamp: null
-  name: storageclassclaims.ocs.openshift.io
-spec:
-  group: ocs.openshift.io
-  names:
-    kind: StorageClassClaim
-    listKind: StorageClassClaimList
-    plural: storageclassclaims
-    singular: storageclassclaim
-  scope: Cluster
-  versions:
-  - additionalPrinterColumns:
-    - jsonPath: .spec.type
-      name: StorageType
-      type: string
-    - jsonPath: .spec.storageProfile
-      name: StorageProfile
-      type: string
-    - jsonPath: .spec.storageClient.name
-      name: StorageClientName
-      type: string
-    - jsonPath: .spec.storageClient.namespace
-      name: StorageClientNamespace
-      type: string
-    - jsonPath: .status.phase
-      name: Phase
-      type: string
-    deprecated: true
-    deprecationWarning: StorageClassClaim API is deprecated and will be removed in
-      future version, please use StorageClaim API instead.
-    name: v1alpha1
-    schema:
-      openAPIV3Schema:
-        description: StorageClassClaim is the Schema for the storageclassclaims API
-        properties:
-          apiVersion:
-            description: 'APIVersion defines the versioned schema of this representation
-              of an object. Servers should convert recognized schemas to the latest
-              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
-            type: string
-          kind:
-            description: 'Kind is a string value representing the REST resource this
-              object represents. Servers may infer this from the endpoint the client
-              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
-            type: string
-          metadata:
-            type: object
-          spec:
-            description: StorageClassClaimSpec defines the desired state of StorageClassClaim
-            properties:
-              encryptionMethod:
-                type: string
-              storageClient:
-                properties:
-                  name:
-                    type: string
-                  namespace:
-                    type: string
-                required:
-                - name
-                - namespace
-                type: object
-              storageProfile:
-                type: string
-              type:
-                enum:
-                - blockpool
-                - sharedfilesystem
-                type: string
-            required:
-            - storageClient
-            - type
-            type: object
-            x-kubernetes-validations:
-            - message: spec is immutable
-              rule: oldSelf == self
-          status:
-            description: StorageClassClaimStatus defines the observed state of StorageClassClaim
-            properties:
-              phase:
-                type: string
-              secretNames:
-                items:
-                  type: string
-                type: array
-            type: object
-        required:
-        - spec
-        type: object
-    served: true
-    storage: true
-    subresources:
-      status: {}
diff --git a/config/crd/bases/ocs.openshift.io_storageclients.yaml b/config/crd/bases/ocs.openshift.io_storageclients.yaml
index 766ae368..afcbcd44 100644
--- a/config/crd/bases/ocs.openshift.io_storageclients.yaml
+++ b/config/crd/bases/ocs.openshift.io_storageclients.yaml
@@ -13,7 +13,7 @@ spec:
   listKind: StorageClientList
   plural: storageclients
   singular: storageclient
-  scope: Namespaced
+  scope: Cluster
   versions:
   - additionalPrinterColumns:
     - jsonPath: .status.phase
diff --git a/config/crd/kustomization.yaml b/config/crd/kustomization.yaml
index 0fabd03c..50fbe2d3 100644
--- a/config/crd/kustomization.yaml
+++ b/config/crd/kustomization.yaml
@@ -3,7 +3,6 @@
 # It should be run by config/default
 resources:
 - bases/ocs.openshift.io_storageclients.yaml
-- bases/ocs.openshift.io_storageclassclaims.yaml
 - bases/ocs.openshift.io_storageclaims.yaml
 #+kubebuilder:scaffold:crdkustomizeresource
@@ -11,14 +10,12 @@ patchesStrategicMerge:
 # [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD #- patches/webhook_in_storageclients.yaml -#- patches/webhook_in_storageclassclaims.yaml #- patches/webhook_in_storageclaims.yaml #+kubebuilder:scaffold:crdkustomizewebhookpatch # [CERTMANAGER] To enable cert-manager, uncomment all the sections with [CERTMANAGER] prefix. # patches here are for enabling the CA injection for each CRD #- patches/cainjection_in_storageclients.yaml -#- patches/cainjection_in_storageclassclaims.yaml #- patches/cainjection_in_storageclaims.yaml #+kubebuilder:scaffold:crdkustomizecainjectionpatch diff --git a/config/crd/patches/cainjection_in_storageclassclaims.yaml b/config/crd/patches/cainjection_in_storageclassclaims.yaml deleted file mode 100644 index 261b8be3..00000000 --- a/config/crd/patches/cainjection_in_storageclassclaims.yaml +++ /dev/null @@ -1,7 +0,0 @@ -# The following patch adds a directive for certmanager to inject CA into the CRD -apiVersion: apiextensions.k8s.io/v1 -kind: CustomResourceDefinition -metadata: - annotations: - cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME) - name: storageclassclaims.ocs.openshift.io diff --git a/config/crd/patches/webhook_in_storageclassclaims.yaml b/config/crd/patches/webhook_in_storageclassclaims.yaml deleted file mode 100644 index de537a77..00000000 --- a/config/crd/patches/webhook_in_storageclassclaims.yaml +++ /dev/null @@ -1,16 +0,0 @@ -# The following patch enables a conversion webhook for the CRD -apiVersion: apiextensions.k8s.io/v1 -kind: CustomResourceDefinition -metadata: - name: storageclassclaims.ocs.openshift.io -spec: - conversion: - strategy: Webhook - webhook: - clientConfig: - service: - namespace: system - name: webhook-service - path: /convert - conversionReviewVersions: - - v1 diff --git a/config/default/kustomization.yaml b/config/default/kustomization.yaml index 866a44ce..c6bc946b 100644 --- a/config/default/kustomization.yaml +++ b/config/default/kustomization.yaml @@ 
-46,7 +46,6 @@ resources: - ../rbac - ../manager - ../crd -- ../console images: - name: kube-rbac-proxy newName: registry.redhat.io/openshift4/ose-kube-rbac-proxy diff --git a/config/default/manager_auth_proxy_patch.yaml b/config/default/manager_auth_proxy_patch.yaml index c50d4743..5ff4b693 100644 --- a/config/default/manager_auth_proxy_patch.yaml +++ b/config/default/manager_auth_proxy_patch.yaml @@ -10,7 +10,7 @@ spec: spec: containers: - name: kube-rbac-proxy - image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0 + image: kube-rbac-proxy:latest args: - "--secure-listen-address=0.0.0.0:8443" - "--upstream=http://127.0.0.1:8080/" diff --git a/config/manager/webhook_service.yaml b/config/manager/webhook_service.yaml index dd1f468b..73fd47e4 100644 --- a/config/manager/webhook_service.yaml +++ b/config/manager/webhook_service.yaml @@ -5,6 +5,7 @@ metadata: service.beta.openshift.io/serving-cert-secret-name: webhook-cert-secret name: webhook-server namespace: system +# should be in sync with pkg/templates/webhookservice.go spec: ports: - name: https diff --git a/config/manifests/bases/ocs-client-operator.clusterserviceversion.yaml b/config/manifests/bases/ocs-client-operator.clusterserviceversion.yaml index 1f4eeb18..a3c91811 100644 --- a/config/manifests/bases/ocs-client-operator.clusterserviceversion.yaml +++ b/config/manifests/bases/ocs-client-operator.clusterserviceversion.yaml @@ -16,11 +16,6 @@ spec: kind: StorageClaim name: storageclaims.ocs.openshift.io version: v1alpha1 - - description: StorageClassClaim is the Schema for the storageclassclaims API - displayName: Storage Class Claim - kind: StorageClassClaim - name: storageclassclaims.ocs.openshift.io - version: v1alpha1 - description: StorageClient is the Schema for the storageclients API displayName: Storage Client kind: StorageClient diff --git a/config/metadata/dependencies.yaml b/config/metadata/dependencies.yaml index dcfeb21f..3a99feee 100644 --- a/config/metadata/dependencies.yaml +++ 
b/config/metadata/dependencies.yaml @@ -1,5 +1,5 @@ dependencies: -- type: olm.package - value: - packageName: csi-addons - version: 0.5.0 + - type: olm.package + value: + packageName: csi-addons + version: 0.8.0 diff --git a/config/rbac/role.yaml b/config/rbac/role.yaml index f65a9b5d..a5055aca 100644 --- a/config/rbac/role.yaml +++ b/config/rbac/role.yaml @@ -16,6 +16,12 @@ rules: - list - update - watch +- apiGroups: + - "" + resources: + - configmaps/finalizers + verbs: + - update - apiGroups: - "" resources: @@ -45,6 +51,7 @@ rules: - validatingwebhookconfigurations verbs: - create + - delete - get - list - update @@ -109,27 +116,9 @@ rules: resources: - clusterversions verbs: - - create - - delete - get - list - - patch - - update - watch -- apiGroups: - - config.openshift.io - resources: - - clusterversions/finalizers - verbs: - - update -- apiGroups: - - config.openshift.io - resources: - - clusterversions/status - verbs: - - get - - patch - - update - apiGroups: - console.openshift.io resources: @@ -180,32 +169,6 @@ rules: - get - patch - update -- apiGroups: - - ocs.openshift.io - resources: - - storageclassclaims - verbs: - - create - - delete - - get - - list - - patch - - update - - watch -- apiGroups: - - ocs.openshift.io - resources: - - storageclassclaims/finalizers - verbs: - - update -- apiGroups: - - ocs.openshift.io - resources: - - storageclassclaims/status - verbs: - - get - - patch - - update - apiGroups: - ocs.openshift.io resources: @@ -255,6 +218,7 @@ rules: - securitycontextconstraints verbs: - create + - delete - get - list - patch diff --git a/config/rbac/storageclassclaim_editor_role.yaml b/config/rbac/storageclassclaim_editor_role.yaml deleted file mode 100644 index dd12c13d..00000000 --- a/config/rbac/storageclassclaim_editor_role.yaml +++ /dev/null @@ -1,31 +0,0 @@ -# permissions for end users to edit storageclassclaims. 
-apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - labels: - app.kubernetes.io/name: clusterrole - app.kubernetes.io/instance: storageclassclaim-editor-role - app.kubernetes.io/component: rbac - app.kubernetes.io/created-by: ocs-client-operator - app.kubernetes.io/part-of: ocs-client-operator - app.kubernetes.io/managed-by: kustomize - name: storageclassclaim-editor-role -rules: -- apiGroups: - - ocs.openshift.io - resources: - - storageclassclaims - verbs: - - create - - delete - - get - - list - - patch - - update - - watch -- apiGroups: - - ocs.openshift.io - resources: - - storageclassclaims/status - verbs: - - get diff --git a/config/rbac/storageclassclaim_viewer_role.yaml b/config/rbac/storageclassclaim_viewer_role.yaml deleted file mode 100644 index 8e1307e9..00000000 --- a/config/rbac/storageclassclaim_viewer_role.yaml +++ /dev/null @@ -1,27 +0,0 @@ -# permissions for end users to view storageclassclaims. -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - labels: - app.kubernetes.io/name: clusterrole - app.kubernetes.io/instance: storageclassclaim-viewer-role - app.kubernetes.io/component: rbac - app.kubernetes.io/created-by: ocs-client-operator - app.kubernetes.io/part-of: ocs-client-operator - app.kubernetes.io/managed-by: kustomize - name: storageclassclaim-viewer-role -rules: -- apiGroups: - - ocs.openshift.io - resources: - - storageclassclaims - verbs: - - get - - list - - watch -- apiGroups: - - ocs.openshift.io - resources: - - storageclassclaims/status - verbs: - - get diff --git a/config/samples/kustomization.yaml b/config/samples/kustomization.yaml index d1194ae9..7adceffc 100644 --- a/config/samples/kustomization.yaml +++ b/config/samples/kustomization.yaml @@ -1,6 +1,5 @@ ## Append samples you want in your CSV to this file as resources ## resources: - ocs_v1alpha1_storageclient.yaml -- ocs_v1alpha1_storageclassclaim.yaml - ocs_v1alpha1_storageclaim.yaml 
#+kubebuilder:scaffold:manifestskustomizesamples diff --git a/config/samples/ocs_v1alpha1_storageclassclaim.yaml b/config/samples/ocs_v1alpha1_storageclassclaim.yaml deleted file mode 100644 index c3bbe8f7..00000000 --- a/config/samples/ocs_v1alpha1_storageclassclaim.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: ocs.openshift.io/v1alpha1 -kind: StorageClassClaim -metadata: - labels: - app.kubernetes.io/name: storageclassclaim - app.kubernetes.io/instance: storageclassclaim-sample - app.kubernetes.io/part-of: ocs-client-operator - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/created-by: ocs-client-operator - name: storageclassclaim-sample -spec: - # TODO(user): Add fields here diff --git a/controllers/clusterversion_controller.go b/controllers/clusterversion_controller.go deleted file mode 100644 index 076210aa..00000000 --- a/controllers/clusterversion_controller.go +++ /dev/null @@ -1,609 +0,0 @@ -/* -Copyright 2023 Red Hat OpenShift Data Foundation. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package controllers - -import ( - "bytes" - "context" - "fmt" - "strconv" - "strings" - - // The embed package is required for the prometheus rule files - _ "embed" - - "github.com/red-hat-storage/ocs-client-operator/pkg/console" - "github.com/red-hat-storage/ocs-client-operator/pkg/csi" - "github.com/red-hat-storage/ocs-client-operator/pkg/templates" - "github.com/red-hat-storage/ocs-client-operator/pkg/utils" - - "github.com/go-logr/logr" - configv1 "github.com/openshift/api/config/v1" - secv1 "github.com/openshift/api/security/v1" - opv1a1 "github.com/operator-framework/api/pkg/operators/v1alpha1" - monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" - admrv1 "k8s.io/api/admissionregistration/v1" - appsv1 "k8s.io/api/apps/v1" - corev1 "k8s.io/api/core/v1" - extv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/types" - k8sYAML "k8s.io/apimachinery/pkg/util/yaml" - "k8s.io/klog/v2" - ctrl "sigs.k8s.io/controller-runtime" - "sigs.k8s.io/controller-runtime/pkg/builder" - "sigs.k8s.io/controller-runtime/pkg/client" - "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" - "sigs.k8s.io/controller-runtime/pkg/handler" - "sigs.k8s.io/controller-runtime/pkg/log" - "sigs.k8s.io/controller-runtime/pkg/predicate" - "sigs.k8s.io/controller-runtime/pkg/reconcile" -) - -//go:embed pvc-rules.yaml -var pvcPrometheusRules string - -const ( - operatorConfigMapName = "ocs-client-operator-config" - // ClusterVersionName is the name of the ClusterVersion object in the - // openshift cluster. 
- clusterVersionName = "version" - deployCSIKey = "DEPLOY_CSI" - subscriptionLabelKey = "managed-by" - subscriptionLabelValue = "webhook.subscription.ocs.openshift.io" -) - -// ClusterVersionReconciler reconciles a ClusterVersion object -type ClusterVersionReconciler struct { - client.Client - OperatorDeployment *appsv1.Deployment - OperatorNamespace string - ConsolePort int32 - Scheme *runtime.Scheme - - log logr.Logger - ctx context.Context - consoleDeployment *appsv1.Deployment - cephFSDeployment *appsv1.Deployment - cephFSDaemonSet *appsv1.DaemonSet - rbdDeployment *appsv1.Deployment - rbdDaemonSet *appsv1.DaemonSet - scc *secv1.SecurityContextConstraints -} - -// SetupWithManager sets up the controller with the Manager. -func (c *ClusterVersionReconciler) SetupWithManager(mgr ctrl.Manager) error { - clusterVersionPredicates := builder.WithPredicates( - predicate.GenerationChangedPredicate{}, - ) - - configMapPredicates := builder.WithPredicates( - predicate.NewPredicateFuncs( - func(client client.Object) bool { - namespace := client.GetNamespace() - name := client.GetName() - return ((namespace == c.OperatorNamespace) && (name == operatorConfigMapName)) - }, - ), - ) - // Reconcile the ClusterVersion object when the operator config map is updated - enqueueClusterVersionRequest := handler.EnqueueRequestsFromMapFunc( - func(_ context.Context, _ client.Object) []reconcile.Request { - return []reconcile.Request{{ - NamespacedName: types.NamespacedName{ - Name: clusterVersionName, - }, - }} - }, - ) - - subscriptionPredicates := builder.WithPredicates( - predicate.NewPredicateFuncs( - func(client client.Object) bool { - return client.GetNamespace() == c.OperatorNamespace - }, - ), - predicate.LabelChangedPredicate{}, - ) - - webhookPredicates := builder.WithPredicates( - predicate.NewPredicateFuncs( - func(client client.Object) bool { - return client.GetName() == templates.SubscriptionWebhookName - }, - ), - ) - - return ctrl.NewControllerManagedBy(mgr). 
- For(&configv1.ClusterVersion{}, clusterVersionPredicates). - Watches(&corev1.ConfigMap{}, enqueueClusterVersionRequest, configMapPredicates). - Watches(&extv1.CustomResourceDefinition{}, enqueueClusterVersionRequest, builder.OnlyMetadata). - Watches(&opv1a1.Subscription{}, enqueueClusterVersionRequest, subscriptionPredicates). - Watches(&admrv1.ValidatingWebhookConfiguration{}, enqueueClusterVersionRequest, webhookPredicates). - Complete(c) -} - -//+kubebuilder:rbac:groups=apiextensions.k8s.io,resources=customresourcedefinitions,verbs=get;list;watch -//+kubebuilder:rbac:groups=config.openshift.io,resources=clusterversions,verbs=get;list;watch;create;update;patch;delete -//+kubebuilder:rbac:groups=config.openshift.io,resources=clusterversions/status,verbs=get;update;patch -//+kubebuilder:rbac:groups=config.openshift.io,resources=clusterversions/finalizers,verbs=update -//+kubebuilder:rbac:groups="apps",resources=deployments,verbs=get;list;watch;create;update;patch;delete -//+kubebuilder:rbac:groups="apps",resources=deployments/finalizers,verbs=update -//+kubebuilder:rbac:groups="apps",resources=daemonsets,verbs=get;list;watch;create;update;patch;delete -//+kubebuilder:rbac:groups="apps",resources=daemonsets/finalizers,verbs=update -//+kubebuilder:rbac:groups="storage.k8s.io",resources=csidrivers,verbs=get;list;watch;create;update;patch;delete -//+kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch;create;update;delete -//+kubebuilder:rbac:groups=security.openshift.io,resources=securitycontextconstraints,verbs=get;list;watch;create;patch;update -//+kubebuilder:rbac:groups=monitoring.coreos.com,resources=prometheusrules,verbs=get;list;watch;create;update -//+kubebuilder:rbac:groups="",resources=services,verbs=get;list;watch;create;update;patch;delete -//+kubebuilder:rbac:groups=console.openshift.io,resources=consoleplugins,verbs=* -//+kubebuilder:rbac:groups=operators.coreos.com,resources=subscriptions,verbs=get;list;watch;update 
-//+kubebuilder:rbac:groups=admissionregistration.k8s.io,resources=validatingwebhookconfigurations,verbs=get;list;update;create;watch - -// For more details, check Reconcile and its Result here: -// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.8.3/pkg/reconcile -func (c *ClusterVersionReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { - c.ctx = ctx - c.log = log.FromContext(ctx, "ClusterVersion", req) - c.log.Info("Reconciling ClusterVersion") - - if err := c.reconcileSubscriptionValidatingWebhook(); err != nil { - c.log.Error(err, "unable to register subscription validating webhook") - return ctrl.Result{}, err - } - - if err := labelClientOperatorSubscription(c); err != nil { - c.log.Error(err, "unable to label ocs client operator subscription") - return ctrl.Result{}, err - } - - if err := c.ensureConsolePlugin(); err != nil { - c.log.Error(err, "unable to deploy client console") - return ctrl.Result{}, err - } - - if deployCSI, err := c.getDeployCSIConfig(); err != nil { - c.log.Error(err, "failed to perform precheck for deploying CSI") - return ctrl.Result{}, err - } else if deployCSI { - instance := configv1.ClusterVersion{} - if err = c.Client.Get(context.TODO(), req.NamespacedName, &instance); err != nil { - return ctrl.Result{}, err - } - - if err := csi.InitializeSidecars(c.log, instance.Status.Desired.Version); err != nil { - c.log.Error(err, "unable to initialize sidecars") - return ctrl.Result{}, err - } - - c.scc = &secv1.SecurityContextConstraints{ - ObjectMeta: metav1.ObjectMeta{ - Name: csi.SCCName, - }, - } - err = c.createOrUpdate(c.scc, func() error { - // TODO: this is a hack to preserve the resourceVersion of the SCC - resourceVersion := c.scc.ResourceVersion - csi.SetSecurityContextConstraintsDesiredState(c.scc, c.OperatorNamespace) - c.scc.ResourceVersion = resourceVersion - return nil - }) - if err != nil { - c.log.Error(err, "unable to create/update SCC") - return ctrl.Result{}, err - } - - // 
create the monitor configmap for the csi drivers but never updates it. - // This is because the monitor configurations are added to the configmap - // when user creates storageclassclaims. - monConfigMap := &corev1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{ - Name: templates.MonConfigMapName, - Namespace: c.OperatorNamespace, - }, - Data: map[string]string{ - "config.json": "[]", - }, - } - if err := c.own(monConfigMap); err != nil { - return ctrl.Result{}, err - } - err = c.create(monConfigMap) - if err != nil && !kerrors.IsAlreadyExists(err) { - c.log.Error(err, "failed to create monitor configmap", "name", monConfigMap.Name) - return ctrl.Result{}, err - } - - // create the encryption configmap for the csi driver but never updates it. - // This is because the encryption configuration are added to the configmap - // by the users before they create the encryption storageclassclaims. - encConfigMap := &corev1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{ - Name: templates.EncryptionConfigMapName, - Namespace: c.OperatorNamespace, - }, - Data: map[string]string{ - "config.json": "[]", - }, - } - if err := c.own(encConfigMap); err != nil { - return ctrl.Result{}, err - } - err = c.create(encConfigMap) - if err != nil && !kerrors.IsAlreadyExists(err) { - c.log.Error(err, "failed to create monitor configmap", "name", encConfigMap.Name) - return ctrl.Result{}, err - } - - c.cephFSDeployment = &appsv1.Deployment{ - ObjectMeta: metav1.ObjectMeta{ - Name: csi.CephFSDeploymentName, - Namespace: c.OperatorNamespace, - }, - } - err = c.createOrUpdate(c.cephFSDeployment, func() error { - if err := c.own(c.cephFSDeployment); err != nil { - return err - } - csi.SetCephFSDeploymentDesiredState(c.cephFSDeployment) - return nil - }) - if err != nil { - c.log.Error(err, "failed to create/update cephfs deployment") - return ctrl.Result{}, err - } - - c.cephFSDaemonSet = &appsv1.DaemonSet{ - ObjectMeta: metav1.ObjectMeta{ - Name: csi.CephFSDaemonSetName, - Namespace: 
c.OperatorNamespace, - }, - } - err = c.createOrUpdate(c.cephFSDaemonSet, func() error { - if err := c.own(c.cephFSDaemonSet); err != nil { - return err - } - csi.SetCephFSDaemonSetDesiredState(c.cephFSDaemonSet) - return nil - }) - if err != nil { - c.log.Error(err, "failed to create/update cephfs daemonset") - return ctrl.Result{}, err - } - - c.rbdDeployment = &appsv1.Deployment{ - ObjectMeta: metav1.ObjectMeta{ - Name: csi.RBDDeploymentName, - Namespace: c.OperatorNamespace, - }, - } - err = c.createOrUpdate(c.rbdDeployment, func() error { - if err := c.own(c.rbdDeployment); err != nil { - return err - } - csi.SetRBDDeploymentDesiredState(c.rbdDeployment) - return nil - }) - if err != nil { - c.log.Error(err, "failed to create/update rbd deployment") - return ctrl.Result{}, err - } - - c.rbdDaemonSet = &appsv1.DaemonSet{ - ObjectMeta: metav1.ObjectMeta{ - Name: csi.RBDDaemonSetName, - Namespace: c.OperatorNamespace, - }, - } - err = c.createOrUpdate(c.rbdDaemonSet, func() error { - if err := c.own(c.rbdDaemonSet); err != nil { - return err - } - csi.SetRBDDaemonSetDesiredState(c.rbdDaemonSet) - return nil - }) - if err != nil { - c.log.Error(err, "failed to create/update rbd daemonset") - return ctrl.Result{}, err - } - - // Need to handle deletion of the csiDriver object, we cannot set - // ownerReference on it as its cluster scoped resource - cephfsCSIDriver := templates.CephFSCSIDriver.DeepCopy() - cephfsCSIDriver.ObjectMeta.Name = csi.GetCephFSDriverName() - err = csi.CreateCSIDriver(c.ctx, c.Client, cephfsCSIDriver) - if err != nil { - c.log.Error(err, "unable to create cephfs CSIDriver") - return ctrl.Result{}, err - } - - rbdCSIDriver := templates.RbdCSIDriver.DeepCopy() - rbdCSIDriver.ObjectMeta.Name = csi.GetRBDDriverName() - err = csi.CreateCSIDriver(c.ctx, c.Client, rbdCSIDriver) - if err != nil { - c.log.Error(err, "unable to create rbd CSIDriver") - return ctrl.Result{}, err - } - - prometheusRule := &monitoringv1.PrometheusRule{} - err = 
k8sYAML.NewYAMLOrJSONDecoder(bytes.NewBufferString(string(pvcPrometheusRules)), 1000).Decode(prometheusRule) - if err != nil { - c.log.Error(err, "Unable to retrieve prometheus rules.", "prometheusRule", klog.KRef(prometheusRule.Namespace, prometheusRule.Name)) - return ctrl.Result{}, err - } - - operatorConfig, err := c.getOperatorConfig() - if err != nil { - return ctrl.Result{}, err - } - prometheusRule.SetNamespace(c.OperatorNamespace) - - err = c.createOrUpdate(prometheusRule, func() error { - applyLabels(operatorConfig.Data["OCS_METRICS_LABELS"], &prometheusRule.ObjectMeta) - return c.own(prometheusRule) - }) - if err != nil { - c.log.Error(err, "failed to create/update prometheus rules") - return ctrl.Result{}, err - } - - c.log.Info("prometheus rules deployed", "prometheusRule", klog.KRef(prometheusRule.Namespace, prometheusRule.Name)) - } - - return ctrl.Result{}, nil -} - -func (c *ClusterVersionReconciler) createOrUpdate(obj client.Object, f controllerutil.MutateFn) error { - result, err := controllerutil.CreateOrUpdate(c.ctx, c.Client, obj, f) - if err != nil { - return err - } - c.log.Info("successfully created or updated", "operation", result, "name", obj.GetName()) - return nil -} - -func (c *ClusterVersionReconciler) own(obj client.Object) error { - return controllerutil.SetControllerReference(c.OperatorDeployment, obj, c.Client.Scheme()) -} - -func (c *ClusterVersionReconciler) create(obj client.Object) error { - return c.Client.Create(c.ctx, obj) -} - -// applyLabels adds labels to object meta, overwriting keys that are already defined. 
-func applyLabels(label string, t *metav1.ObjectMeta) { - // Create a map to store the configuration - promLabel := make(map[string]string) - - labels := strings.Split(label, "\n") - // Loop through the lines and extract key-value pairs - for _, line := range labels { - if len(line) == 0 { - continue - } - parts := strings.SplitN(line, ":", 2) - key := strings.TrimSpace(parts[0]) - value := strings.TrimSpace(parts[1]) - promLabel[key] = value - } - - t.Labels = promLabel -} - -func (c *ClusterVersionReconciler) getOperatorConfig() (*corev1.ConfigMap, error) { - cm := &corev1.ConfigMap{} - err := c.Client.Get(c.ctx, types.NamespacedName{Name: operatorConfigMapName, Namespace: c.OperatorNamespace}, cm) - if err != nil && !kerrors.IsNotFound(err) { - return nil, err - } - return cm, nil -} - -func (c *ClusterVersionReconciler) ensureConsolePlugin() error { - c.consoleDeployment = &appsv1.Deployment{ - ObjectMeta: metav1.ObjectMeta{ - Name: console.DeploymentName, - Namespace: c.OperatorNamespace, - }, - } - - err := c.Client.Get(c.ctx, types.NamespacedName{ - Name: console.DeploymentName, - Namespace: c.OperatorNamespace, - }, c.consoleDeployment) - if err != nil { - c.log.Error(err, "failed to get the deployment for the console") - return err - } - - nginxConf := console.GetNginxConf() - nginxConfigMap := &corev1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{ - Name: console.NginxConfigMapName, - Namespace: c.OperatorNamespace, - }, - Data: map[string]string{ - "nginx.conf": nginxConf, - }, - } - err = c.createOrUpdate(nginxConfigMap, func() error { - if consoleConfigMapData := nginxConfigMap.Data["nginx.conf"]; consoleConfigMapData != nginxConf { - nginxConfigMap.Data["nginx.conf"] = nginxConf - } - return controllerutil.SetControllerReference(c.consoleDeployment, nginxConfigMap, c.Scheme) - }) - - if err != nil { - c.log.Error(err, "failed to create nginx config map") - return err - } - - consoleService := console.GetService(c.ConsolePort, c.OperatorNamespace) - - err 
= c.createOrUpdate(consoleService, func() error { - if err := controllerutil.SetControllerReference(c.consoleDeployment, consoleService, c.Scheme); err != nil { - return err - } - console.GetService(c.ConsolePort, c.OperatorNamespace).DeepCopyInto(consoleService) - return nil - }) - - if err != nil { - c.log.Error(err, "failed to create/update service for console") - return err - } - - consolePlugin := console.GetConsolePlugin(c.ConsolePort, c.OperatorNamespace) - err = c.createOrUpdate(consolePlugin, func() error { - // preserve the resourceVersion of the consolePlugin - resourceVersion := consolePlugin.ResourceVersion - console.GetConsolePlugin(c.ConsolePort, c.OperatorNamespace).DeepCopyInto(consolePlugin) - consolePlugin.ResourceVersion = resourceVersion - return nil - }) - - if err != nil { - c.log.Error(err, "failed to create/update consoleplugin") - return err - } - - return nil -} - -func (c *ClusterVersionReconciler) getDeployCSIConfig() (bool, error) { - operatorConfig := &corev1.ConfigMap{} - operatorConfig.Name = operatorConfigMapName - operatorConfig.Namespace = c.OperatorNamespace - if err := c.get(operatorConfig); err != nil { - return false, fmt.Errorf("failed to get operator configmap: %v", err) - } - - data := operatorConfig.Data - if data == nil { - data = map[string]string{} - } - - var deployCSI bool - var err error - if value, ok := data[deployCSIKey]; ok { - deployCSI, err = strconv.ParseBool(value) - if err != nil { - return false, fmt.Errorf("failed to parse value for %q in operator configmap as a boolean: %v", deployCSIKey, err) - } - } else { - // CSI installation is not specified explicitly in the configmap and - // behaviour is different in case we recognize the StorageCluster API on the cluster. 
- storageClusterCRD := &metav1.PartialObjectMetadata{} - storageClusterCRD.SetGroupVersionKind( - extv1.SchemeGroupVersion.WithKind("CustomResourceDefinition"), - ) - storageClusterCRD.Name = "storageclusters.ocs.openshift.io" - if err = c.get(storageClusterCRD); err != nil { - if !kerrors.IsNotFound(err) { - return false, fmt.Errorf("failed to verify existence of storagecluster crd: %v", err) - } - // storagecluster CRD doesn't exist - deployCSI = true - } else { - // storagecluster CRD exists and don't deploy CSI until explicitly mentioned in the configmap - deployCSI = false - } - } - - return deployCSI, nil -} - -func (c *ClusterVersionReconciler) get(obj client.Object, opts ...client.GetOption) error { - return c.Get(c.ctx, client.ObjectKeyFromObject(obj), obj, opts...) -} - -func (c *ClusterVersionReconciler) reconcileSubscriptionValidatingWebhook() error { - whConfig := &admrv1.ValidatingWebhookConfiguration{} - whConfig.Name = templates.SubscriptionWebhookName - - // TODO (lgangava): after change to configmap controller, need to remove webhook during deletion - err := c.createOrUpdate(whConfig, func() error { - - // openshift fills in the ca on finding this annotation - whConfig.Annotations = map[string]string{ - "service.beta.openshift.io/inject-cabundle": "true", - } - - var caBundle []byte - if len(whConfig.Webhooks) == 0 { - whConfig.Webhooks = make([]admrv1.ValidatingWebhook, 1) - } else { - // do not mutate CA bundle that was injected by openshift - caBundle = whConfig.Webhooks[0].ClientConfig.CABundle - } - - // webhook desired state - var wh *admrv1.ValidatingWebhook = &whConfig.Webhooks[0] - templates.SubscriptionValidatingWebhook.DeepCopyInto(wh) - - wh.Name = whConfig.Name - // only send requests received from own namespace - wh.NamespaceSelector = &metav1.LabelSelector{ - MatchLabels: map[string]string{ - "kubernetes.io/metadata.name": c.OperatorNamespace, - }, - } - // only send resources matching the label - wh.ObjectSelector = 
&metav1.LabelSelector{ - MatchLabels: map[string]string{ - subscriptionLabelKey: subscriptionLabelValue, - }, - } - // preserve the existing (injected) CA bundle if any - wh.ClientConfig.CABundle = caBundle - // send request to the service running in own namespace - wh.ClientConfig.Service.Namespace = c.OperatorNamespace - - return nil - }) - - if err != nil { - return err - } - - c.log.Info("successfully registered validating webhook") - return nil -} - -func labelClientOperatorSubscription(c *ClusterVersionReconciler) error { - subscriptionList := &opv1a1.SubscriptionList{} - err := c.List(c.ctx, subscriptionList, client.InNamespace(c.OperatorNamespace)) - if err != nil { - return fmt.Errorf("failed to list subscriptions") - } - - sub := utils.Find(subscriptionList.Items, func(sub *opv1a1.Subscription) bool { - return sub.Spec.Package == "ocs-client-operator" - }) - - if sub == nil { - return fmt.Errorf("failed to find subscription with ocs-client-operator package") - } - - if utils.AddLabel(sub, subscriptionLabelKey, subscriptionLabelValue) { - if err := c.Update(c.ctx, sub); err != nil { - return err - } - } - - c.log.Info("successfully labelled ocs-client-operator subscription") - return nil -} diff --git a/controllers/operatorconfigmap_controller.go b/controllers/operatorconfigmap_controller.go new file mode 100644 index 00000000..93f7c356 --- /dev/null +++ b/controllers/operatorconfigmap_controller.go @@ -0,0 +1,770 @@ +/* +Copyright 2023 Red Hat OpenShift Data Foundation. +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package controllers
+
+import (
+	"bytes"
+	"context"
+	"fmt"
+	"strconv"
+	"strings"
+
+	// The embed package is required for the prometheus rule files
+	_ "embed"
+
+	"github.com/red-hat-storage/ocs-client-operator/api/v1alpha1"
+	"github.com/red-hat-storage/ocs-client-operator/pkg/console"
+	"github.com/red-hat-storage/ocs-client-operator/pkg/csi"
+	"github.com/red-hat-storage/ocs-client-operator/pkg/templates"
+	"github.com/red-hat-storage/ocs-client-operator/pkg/utils"
+
+	"github.com/go-logr/logr"
+	configv1 "github.com/openshift/api/config/v1"
+	secv1 "github.com/openshift/api/security/v1"
+	opv1a1 "github.com/operator-framework/api/pkg/operators/v1alpha1"
+	monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
+	admrv1 "k8s.io/api/admissionregistration/v1"
+	appsv1 "k8s.io/api/apps/v1"
+	corev1 "k8s.io/api/core/v1"
+	extv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+	kerrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/types"
+	k8sYAML "k8s.io/apimachinery/pkg/util/yaml"
+	"k8s.io/klog/v2"
+	ctrl "sigs.k8s.io/controller-runtime"
+	"sigs.k8s.io/controller-runtime/pkg/builder"
+	"sigs.k8s.io/controller-runtime/pkg/client"
+	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
+	"sigs.k8s.io/controller-runtime/pkg/handler"
+	"sigs.k8s.io/controller-runtime/pkg/log"
+	"sigs.k8s.io/controller-runtime/pkg/predicate"
+	"sigs.k8s.io/controller-runtime/pkg/reconcile"
+)
+
+//go:embed pvc-rules.yaml
+var pvcPrometheusRules string
+
+const (
+	operatorConfigMapName = "ocs-client-operator-config"
+	// clusterVersionName is the name of the ClusterVersion object in the
+	// openshift cluster.
+	clusterVersionName = "version"
+	deployCSIKey = "DEPLOY_CSI"
+	subscriptionLabelKey = "managed-by"
+	subscriptionLabelValue = "webhook.subscription.ocs.openshift.io"
+
+	operatorConfigMapFinalizer = "ocs-client-operator.ocs.openshift.io/storageused"
+)
+
+// OperatorConfigMapReconciler reconciles the operator ConfigMap object
+type OperatorConfigMapReconciler struct {
+	client.Client
+	OperatorNamespace string
+	ConsolePort       int32
+	Scheme            *runtime.Scheme
+
+	log               logr.Logger
+	ctx               context.Context
+	operatorConfigMap *corev1.ConfigMap
+	consoleDeployment *appsv1.Deployment
+	cephFSDeployment  *appsv1.Deployment
+	cephFSDaemonSet   *appsv1.DaemonSet
+	rbdDeployment     *appsv1.Deployment
+	rbdDaemonSet      *appsv1.DaemonSet
+	scc               *secv1.SecurityContextConstraints
+}
+
+// SetupWithManager sets up the controller with the Manager.
+func (c *OperatorConfigMapReconciler) SetupWithManager(mgr ctrl.Manager) error {
+	clusterVersionPredicates := builder.WithPredicates(
+		predicate.GenerationChangedPredicate{},
+	)
+
+	configMapPredicates := builder.WithPredicates(
+		predicate.NewPredicateFuncs(
+			func(client client.Object) bool {
+				namespace := client.GetNamespace()
+				name := client.GetName()
+				return ((namespace == c.OperatorNamespace) && (name == operatorConfigMapName))
+			},
+		),
+	)
+	// Reconcile the OperatorConfigMap object when the cluster's version object is updated
+	enqueueConfigMapRequest := handler.EnqueueRequestsFromMapFunc(
+		func(_ context.Context, _ client.Object) []reconcile.Request {
+			return []reconcile.Request{{
+				NamespacedName: types.NamespacedName{
+					Name:      operatorConfigMapName,
+					Namespace: c.OperatorNamespace,
+				},
+			}}
+		},
+	)
+
+	subscriptionPredicates := builder.WithPredicates(
+		predicate.NewPredicateFuncs(
+			func(client client.Object) bool {
+				return client.GetNamespace() == c.OperatorNamespace
+			},
+		),
+		predicate.LabelChangedPredicate{},
+	)
+
+	webhookPredicates := builder.WithPredicates(
+		predicate.NewPredicateFuncs(
+			func(client client.Object) bool {
return client.GetName() == templates.SubscriptionWebhookName + }, + ), + ) + + servicePredicate := builder.WithPredicates( + predicate.NewPredicateFuncs( + func(obj client.Object) bool { + return obj.GetNamespace() == c.OperatorNamespace && obj.GetName() == templates.WebhookServiceName + }, + ), + ) + + return ctrl.NewControllerManagedBy(mgr). + For(&corev1.ConfigMap{}, configMapPredicates). + Owns(&corev1.Service{}, servicePredicate). + Watches(&configv1.ClusterVersion{}, enqueueConfigMapRequest, clusterVersionPredicates). + Watches(&extv1.CustomResourceDefinition{}, enqueueConfigMapRequest, builder.OnlyMetadata). + Watches(&opv1a1.Subscription{}, enqueueConfigMapRequest, subscriptionPredicates). + Watches(&admrv1.ValidatingWebhookConfiguration{}, enqueueConfigMapRequest, webhookPredicates). + Watches(&v1alpha1.StorageClient{}, enqueueConfigMapRequest, builder.WithPredicates(predicate.AnnotationChangedPredicate{})). + Complete(c) +} + +//+kubebuilder:rbac:groups=apiextensions.k8s.io,resources=customresourcedefinitions,verbs=get;list;watch +//+kubebuilder:rbac:groups=config.openshift.io,resources=clusterversions,verbs=get;list;watch +//+kubebuilder:rbac:groups="apps",resources=deployments,verbs=get;list;watch;create;update;patch;delete +//+kubebuilder:rbac:groups="apps",resources=deployments/finalizers,verbs=update +//+kubebuilder:rbac:groups="apps",resources=daemonsets,verbs=get;list;watch;create;update;patch;delete +//+kubebuilder:rbac:groups="apps",resources=daemonsets/finalizers,verbs=update +//+kubebuilder:rbac:groups="storage.k8s.io",resources=csidrivers,verbs=get;list;watch;create;update;patch;delete +//+kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch;create;update;delete +//+kubebuilder:rbac:groups="",resources=configmaps/finalizers,verbs=update +//+kubebuilder:rbac:groups=security.openshift.io,resources=securitycontextconstraints,verbs=get;list;watch;create;patch;update;delete 
+//+kubebuilder:rbac:groups=monitoring.coreos.com,resources=prometheusrules,verbs=get;list;watch;create;update +//+kubebuilder:rbac:groups="",resources=services,verbs=get;list;watch;create;update;patch;delete +//+kubebuilder:rbac:groups=console.openshift.io,resources=consoleplugins,verbs=* +//+kubebuilder:rbac:groups=operators.coreos.com,resources=subscriptions,verbs=get;list;watch;update +//+kubebuilder:rbac:groups=admissionregistration.k8s.io,resources=validatingwebhookconfigurations,verbs=get;list;update;create;watch;delete + +// For more details, check Reconcile and its Result here: +// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.8.3/pkg/reconcile +func (c *OperatorConfigMapReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { + c.ctx = ctx + c.log = log.FromContext(ctx, "OperatorConfigMap", req) + c.log.Info("Reconciling OperatorConfigMap") + + c.operatorConfigMap = &corev1.ConfigMap{} + c.operatorConfigMap.Name = req.Name + c.operatorConfigMap.Namespace = req.Namespace + if err := c.get(c.operatorConfigMap); err != nil { + if kerrors.IsNotFound(err) { + c.log.Info("Operator ConfigMap resource not found. 
Ignoring since object might be deleted.") + return reconcile.Result{}, nil + } + c.log.Error(err, "failed to get the operator's configMap") + return reconcile.Result{}, err + } + + if c.operatorConfigMap.GetDeletionTimestamp().IsZero() { + + //ensure finalizer + if controllerutil.AddFinalizer(c.operatorConfigMap, operatorConfigMapFinalizer) { + c.log.Info("finalizer missing on the operatorConfigMap resource, adding...") + if err := c.Client.Update(c.ctx, c.operatorConfigMap); err != nil { + return ctrl.Result{}, err + } + } + + if err := c.reconcileWebhookService(); err != nil { + c.log.Error(err, "unable to reconcile webhook service") + return ctrl.Result{}, err + } + + if err := c.reconcileSubscriptionValidatingWebhook(); err != nil { + c.log.Error(err, "unable to register subscription validating webhook") + return ctrl.Result{}, err + } + + if err := c.reconcileClientOperatorSubscriptionLabel(); err != nil { + c.log.Error(err, "unable to label ocs client operator subscription") + return ctrl.Result{}, err + } + + if err := c.reconcileSubscription(); err != nil { + c.log.Error(err, "unable to reconcile subscription") + return ctrl.Result{}, err + } + + if err := c.ensureConsolePlugin(); err != nil { + c.log.Error(err, "unable to deploy client console") + return ctrl.Result{}, err + } + + if deployCSI, err := c.getDeployCSIConfig(); err != nil { + c.log.Error(err, "failed to perform precheck for deploying CSI") + return ctrl.Result{}, err + } else if deployCSI { + clusterVersion := &configv1.ClusterVersion{} + clusterVersion.Name = clusterVersionName + if err := c.get(clusterVersion); err != nil { + c.log.Error(err, "failed to get the clusterVersion version of the OCP cluster") + return reconcile.Result{}, err + } + + if err := csi.InitializeSidecars(c.log, clusterVersion.Status.Desired.Version); err != nil { + c.log.Error(err, "unable to initialize sidecars") + return ctrl.Result{}, err + } + + c.scc = &secv1.SecurityContextConstraints{ + ObjectMeta: 
metav1.ObjectMeta{ + Name: csi.SCCName, + }, + } + err = c.createOrUpdate(c.scc, func() error { + // TODO: this is a hack to preserve the resourceVersion of the SCC + resourceVersion := c.scc.ResourceVersion + csi.SetSecurityContextConstraintsDesiredState(c.scc, c.OperatorNamespace) + c.scc.ResourceVersion = resourceVersion + return nil + }) + if err != nil { + c.log.Error(err, "unable to create/update SCC") + return ctrl.Result{}, err + } + + // create the monitor configmap for the csi drivers but never updates it. + // This is because the monitor configurations are added to the configmap + // when user creates storageclassclaims. + monConfigMap := &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: templates.MonConfigMapName, + Namespace: c.OperatorNamespace, + }, + Data: map[string]string{ + "config.json": "[]", + }, + } + if err := c.own(monConfigMap); err != nil { + return ctrl.Result{}, err + } + + if err := c.create(monConfigMap); err != nil && !kerrors.IsAlreadyExists(err) { + c.log.Error(err, "failed to create monitor configmap", "name", monConfigMap.Name) + return ctrl.Result{}, err + } + + // create the encryption configmap for the csi driver but never updates it. + // This is because the encryption configuration are added to the configmap + // by the users before they create the encryption storageclassclaims. 
+			encConfigMap := &corev1.ConfigMap{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      templates.EncryptionConfigMapName,
+					Namespace: c.OperatorNamespace,
+				},
+				Data: map[string]string{
+					"config.json": "[]",
+				},
+			}
+			if err := c.own(encConfigMap); err != nil {
+				return ctrl.Result{}, err
+			}
+
+			if err := c.create(encConfigMap); err != nil && !kerrors.IsAlreadyExists(err) {
+				c.log.Error(err, "failed to create encryption configmap", "name", encConfigMap.Name)
+				return ctrl.Result{}, err
+			}
+
+			c.cephFSDeployment = &appsv1.Deployment{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      csi.CephFSDeploymentName,
+					Namespace: c.OperatorNamespace,
+				},
+			}
+			err = c.createOrUpdate(c.cephFSDeployment, func() error {
+				if err := c.own(c.cephFSDeployment); err != nil {
+					return err
+				}
+				csi.SetCephFSDeploymentDesiredState(c.cephFSDeployment)
+				return nil
+			})
+			if err != nil {
+				c.log.Error(err, "failed to create/update cephfs deployment")
+				return ctrl.Result{}, err
+			}
+
+			c.cephFSDaemonSet = &appsv1.DaemonSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      csi.CephFSDaemonSetName,
+					Namespace: c.OperatorNamespace,
+				},
+			}
+			err = c.createOrUpdate(c.cephFSDaemonSet, func() error {
+				if err := c.own(c.cephFSDaemonSet); err != nil {
+					return err
+				}
+				csi.SetCephFSDaemonSetDesiredState(c.cephFSDaemonSet)
+				return nil
+			})
+			if err != nil {
+				c.log.Error(err, "failed to create/update cephfs daemonset")
+				return ctrl.Result{}, err
+			}
+
+			c.rbdDeployment = &appsv1.Deployment{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      csi.RBDDeploymentName,
+					Namespace: c.OperatorNamespace,
+				},
+			}
+			err = c.createOrUpdate(c.rbdDeployment, func() error {
+				if err := c.own(c.rbdDeployment); err != nil {
+					return err
+				}
+				csi.SetRBDDeploymentDesiredState(c.rbdDeployment)
+				return nil
+			})
+			if err != nil {
+				c.log.Error(err, "failed to create/update rbd deployment")
+				return ctrl.Result{}, err
+			}
+
+			c.rbdDaemonSet = &appsv1.DaemonSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      csi.RBDDaemonSetName,
+					Namespace: c.OperatorNamespace,
+				},
+			}
+			err = c.createOrUpdate(c.rbdDaemonSet, func() error {
+				if err := c.own(c.rbdDaemonSet); err != nil {
+					return err
+				}
+				csi.SetRBDDaemonSetDesiredState(c.rbdDaemonSet)
+				return nil
+			})
+			if err != nil {
+				c.log.Error(err, "failed to create/update rbd daemonset")
+				return ctrl.Result{}, err
+			}
+
+			// Need to handle deletion of the csiDriver object; we cannot set an
+			// ownerReference on it as it is a cluster-scoped resource
+			cephfsCSIDriver := templates.CephFSCSIDriver.DeepCopy()
+			cephfsCSIDriver.ObjectMeta.Name = csi.GetCephFSDriverName()
+			if err := csi.CreateCSIDriver(c.ctx, c.Client, cephfsCSIDriver); err != nil {
+				c.log.Error(err, "unable to create cephfs CSIDriver")
+				return ctrl.Result{}, err
+			}
+
+			rbdCSIDriver := templates.RbdCSIDriver.DeepCopy()
+			rbdCSIDriver.ObjectMeta.Name = csi.GetRBDDriverName()
+			if err := csi.CreateCSIDriver(c.ctx, c.Client, rbdCSIDriver); err != nil {
+				c.log.Error(err, "unable to create rbd CSIDriver")
+				return ctrl.Result{}, err
+			}
+
+			prometheusRule := &monitoringv1.PrometheusRule{}
+			if err := k8sYAML.NewYAMLOrJSONDecoder(bytes.NewBufferString(pvcPrometheusRules), 1000).Decode(prometheusRule); err != nil {
+				c.log.Error(err, "Unable to retrieve prometheus rules.", "prometheusRule", klog.KRef(prometheusRule.Namespace, prometheusRule.Name))
+				return ctrl.Result{}, err
+			}
+
+			prometheusRule.SetNamespace(c.OperatorNamespace)
+
+			err = c.createOrUpdate(prometheusRule, func() error {
+				applyLabels(c.operatorConfigMap.Data["OCS_METRICS_LABELS"], &prometheusRule.ObjectMeta)
+				return c.own(prometheusRule)
+			})
+			if err != nil {
+				c.log.Error(err, "failed to create/update prometheus rules")
+				return ctrl.Result{}, err
+			}
+
+			c.log.Info("prometheus rules deployed", "prometheusRule", klog.KRef(prometheusRule.Namespace, prometheusRule.Name))
+		}
+	} else {
+		// deletion phase
+		if err := c.deletionPhase(); err != nil {
+			return ctrl.Result{}, err
+		}
+
+		// remove finalizer
+		if controllerutil.RemoveFinalizer(c.operatorConfigMap, operatorConfigMapFinalizer) {
+			if err := c.Client.Update(c.ctx, c.operatorConfigMap); err != nil {
+				return ctrl.Result{}, err
+			}
+			c.log.Info("finalizer removed successfully")
+		}
+	}
+	return ctrl.Result{}, nil
+}
+
+func (c *OperatorConfigMapReconciler) deletionPhase() error {
+	claimsList := &v1alpha1.StorageClaimList{}
+	if err := c.list(claimsList, client.Limit(1)); err != nil {
+		c.log.Error(err, "unable to verify StorageClaims presence prior to removal of CSI resources")
+		return err
+	} else if len(claimsList.Items) != 0 {
+		err = fmt.Errorf("failed to clean up resources: storage claims are present on the cluster")
+		c.log.Error(err, "Waiting for all storageClaims to be deleted.")
+		return err
+	}
+	if err := csi.DeleteCSIDriver(c.ctx, c.Client, csi.GetCephFSDriverName()); err != nil && !kerrors.IsNotFound(err) {
+		c.log.Error(err, "unable to delete cephfs CSIDriver")
+		return err
+	}
+	if err := csi.DeleteCSIDriver(c.ctx, c.Client, csi.GetRBDDriverName()); err != nil && !kerrors.IsNotFound(err) {
+		c.log.Error(err, "unable to delete rbd CSIDriver")
+		return err
+	}
+
+	c.scc = &secv1.SecurityContextConstraints{}
+	c.scc.Name = csi.SCCName
+	if err := c.delete(c.scc); err != nil {
+		c.log.Error(err, "unable to delete SCC")
+		return err
+	}
+
+	whConfig := &admrv1.ValidatingWebhookConfiguration{}
+	whConfig.Name = templates.SubscriptionWebhookName
+	if err := c.delete(whConfig); err != nil {
+		c.log.Error(err, "failed to delete subscription webhook")
+		return err
+	}
+
+	return nil
+}
+
+func (c *OperatorConfigMapReconciler) createOrUpdate(obj client.Object, f controllerutil.MutateFn) error {
+	result, err := controllerutil.CreateOrUpdate(c.ctx, c.Client, obj, f)
+	if err != nil {
+		return err
+	}
+	c.log.Info("successfully created or updated", "operation", result, "name", obj.GetName())
+	return nil
+}
+
+func (c *OperatorConfigMapReconciler) own(obj client.Object) error {
+	return controllerutil.SetControllerReference(c.operatorConfigMap, obj, c.Client.Scheme())
+}
+
+func (c *OperatorConfigMapReconciler) create(obj client.Object) error {
+	return c.Client.Create(c.ctx, obj)
+}
+
+// applyLabels adds labels to object meta, overwriting keys that are already defined.
+func applyLabels(label string, t *metav1.ObjectMeta) {
+	// Create a map to store the configuration
+	promLabel := make(map[string]string)
+
+	labels := strings.Split(label, "\n")
+	// Loop through the lines and extract key-value pairs
+	for _, line := range labels {
+		if len(line) == 0 {
+			continue
+		}
+		parts := strings.SplitN(line, ":", 2)
+		// skip malformed lines without a ":" separator to avoid an
+		// out-of-range panic on parts[1]
+		if len(parts) < 2 {
+			continue
+		}
+		key := strings.TrimSpace(parts[0])
+		value := strings.TrimSpace(parts[1])
+		promLabel[key] = value
+	}
+
+	t.Labels = promLabel
+}
+
+func (c *OperatorConfigMapReconciler) ensureConsolePlugin() error {
+	c.consoleDeployment = &appsv1.Deployment{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      console.DeploymentName,
+			Namespace: c.OperatorNamespace,
+		},
+	}
+
+	err := c.get(c.consoleDeployment)
+	if err != nil {
+		c.log.Error(err, "failed to get the deployment for the console")
+		return err
+	}
+
+	nginxConf := console.GetNginxConf()
+	nginxConfigMap := &corev1.ConfigMap{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      console.NginxConfigMapName,
+			Namespace: c.OperatorNamespace,
+		},
+		Data: map[string]string{
+			"nginx.conf": nginxConf,
+		},
+	}
+	err = c.createOrUpdate(nginxConfigMap, func() error {
+		if consoleConfigMapData := nginxConfigMap.Data["nginx.conf"]; consoleConfigMapData != nginxConf {
+			nginxConfigMap.Data["nginx.conf"] = nginxConf
+		}
+		return controllerutil.SetControllerReference(c.consoleDeployment, nginxConfigMap, c.Scheme)
+	})
+
+	if err != nil {
+		c.log.Error(err, "failed to create nginx config map")
+		return err
+	}
+
+	consoleService := console.GetService(c.ConsolePort, c.OperatorNamespace)
+
+	err = c.createOrUpdate(consoleService, func() error {
+		if err := controllerutil.SetControllerReference(c.consoleDeployment,
consoleService, c.Scheme); err != nil { + return err + } + console.GetService(c.ConsolePort, c.OperatorNamespace).DeepCopyInto(consoleService) + return nil + }) + + if err != nil { + c.log.Error(err, "failed to create/update service for console") + return err + } + + consolePlugin := console.GetConsolePlugin(c.ConsolePort, c.OperatorNamespace) + err = c.createOrUpdate(consolePlugin, func() error { + // preserve the resourceVersion of the consolePlugin + resourceVersion := consolePlugin.ResourceVersion + console.GetConsolePlugin(c.ConsolePort, c.OperatorNamespace).DeepCopyInto(consolePlugin) + consolePlugin.ResourceVersion = resourceVersion + return nil + }) + + if err != nil { + c.log.Error(err, "failed to create/update consoleplugin") + return err + } + + return nil +} + +func (c *OperatorConfigMapReconciler) getDeployCSIConfig() (bool, error) { + data := c.operatorConfigMap.Data + if data == nil { + data = map[string]string{} + } + + var deployCSI bool + var err error + if value, ok := data[deployCSIKey]; ok { + deployCSI, err = strconv.ParseBool(value) + if err != nil { + return false, fmt.Errorf("failed to parse value for %q in operator configmap as a boolean: %v", deployCSIKey, err) + } + } else { + // CSI installation is not specified explicitly in the configmap and + // behaviour is different in case we recognize the StorageCluster API on the cluster. 
+ storageClusterCRD := &metav1.PartialObjectMetadata{} + storageClusterCRD.SetGroupVersionKind( + extv1.SchemeGroupVersion.WithKind("CustomResourceDefinition"), + ) + storageClusterCRD.Name = "storageclusters.ocs.openshift.io" + if err = c.get(storageClusterCRD); err != nil { + if !kerrors.IsNotFound(err) { + return false, fmt.Errorf("failed to verify existence of storagecluster crd: %v", err) + } + // storagecluster CRD doesn't exist + deployCSI = true + } else { + // storagecluster CRD exists and don't deploy CSI until explicitly mentioned in the configmap + deployCSI = false + } + } + + return deployCSI, nil +} + +func (c *OperatorConfigMapReconciler) get(obj client.Object, opts ...client.GetOption) error { + return c.Get(c.ctx, client.ObjectKeyFromObject(obj), obj, opts...) +} + +func (c *OperatorConfigMapReconciler) reconcileSubscriptionValidatingWebhook() error { + whConfig := &admrv1.ValidatingWebhookConfiguration{} + whConfig.Name = templates.SubscriptionWebhookName + + // TODO (lgangava): after change to configmap controller, need to remove webhook during deletion + err := c.createOrUpdate(whConfig, func() error { + + // openshift fills in the ca on finding this annotation + whConfig.Annotations = map[string]string{ + "service.beta.openshift.io/inject-cabundle": "true", + } + + var caBundle []byte + if len(whConfig.Webhooks) == 0 { + whConfig.Webhooks = make([]admrv1.ValidatingWebhook, 1) + } else { + // do not mutate CA bundle that was injected by openshift + caBundle = whConfig.Webhooks[0].ClientConfig.CABundle + } + + // webhook desired state + var wh *admrv1.ValidatingWebhook = &whConfig.Webhooks[0] + templates.SubscriptionValidatingWebhook.DeepCopyInto(wh) + + wh.Name = whConfig.Name + // only send requests received from own namespace + wh.NamespaceSelector = &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "kubernetes.io/metadata.name": c.OperatorNamespace, + }, + } + // only send resources matching the label + wh.ObjectSelector = 
&metav1.LabelSelector{ + MatchLabels: map[string]string{ + subscriptionLabelKey: subscriptionLabelValue, + }, + } + // preserve the existing (injected) CA bundle if any + wh.ClientConfig.CABundle = caBundle + // send request to the service running in own namespace + wh.ClientConfig.Service.Namespace = c.OperatorNamespace + + return nil + }) + + if err != nil { + return err + } + + c.log.Info("successfully registered validating webhook") + return nil +} + +func (c *OperatorConfigMapReconciler) reconcileClientOperatorSubscriptionLabel() error { + subscriptionList := &opv1a1.SubscriptionList{} + err := c.List(c.ctx, subscriptionList, client.InNamespace(c.OperatorNamespace)) + if err != nil { + return fmt.Errorf("failed to list subscriptions") + } + + sub := utils.Find(subscriptionList.Items, func(sub *opv1a1.Subscription) bool { + return sub.Spec.Package == "ocs-client-operator" + }) + + if sub == nil { + return fmt.Errorf("failed to find subscription with ocs-client-operator package") + } + + if utils.AddLabel(sub, subscriptionLabelKey, subscriptionLabelValue) { + if err := c.Update(c.ctx, sub); err != nil { + return err + } + } + + c.log.Info("successfully labelled ocs-client-operator subscription") + return nil +} + +func (c *OperatorConfigMapReconciler) reconcileSubscription() error { + + storageClients := &v1alpha1.StorageClientList{} + if err := c.list(storageClients); err != nil { + return fmt.Errorf("failed to list storageclients: %v", err) + } + + var desiredChannel string + for idx := range storageClients.Items { + // empty if annotation doesn't exist or else gets desired channel + channel := storageClients. + Items[idx]. 
+			GetAnnotations()[utils.DesiredSubscriptionChannelAnnotationKey]
+		// skip clients with no/empty desired channel annotation
+		if channel != "" {
+			// check if we already established a desired channel
+			if desiredChannel == "" {
+				desiredChannel = channel
+			}
+			// check for agreement between clients
+			if channel != desiredChannel {
+				desiredChannel = ""
+				// two clients requested different channels; no need to continue further
+				break
+			}
+		}
+	}
+
+	if desiredChannel != "" {
+		subscriptions := &opv1a1.SubscriptionList{}
+		err := c.list(
+			subscriptions,
+			client.InNamespace(c.OperatorNamespace),
+			client.MatchingLabels{subscriptionLabelKey: subscriptionLabelValue},
+			client.Limit(1),
+		)
+		if err != nil {
+			return fmt.Errorf("failed to list subscription for ocs-client-operator using labels: %v", err)
+		}
+
+		if len(subscriptions.Items) == 1 {
+			clientSubscription := &subscriptions.Items[0]
+			if desiredChannel != clientSubscription.Spec.Channel {
+				clientSubscription.Spec.Channel = desiredChannel
+				// TODO: https://github.com/red-hat-storage/ocs-client-operator/issues/130
+				// the platform may be behind the desired channel; updating the channel then only
+				// moves the subscription into an upgrading state, without side effects for already
+				// running workloads. However, this is a silent failure and needs to be fixed via
+				// the TODO issue above.
+				if err := c.update(clientSubscription); err != nil {
+					return fmt.Errorf("failed to update subscription channel to %v: %v", desiredChannel, err)
+				}
+			}
+		}
+	}
+
+	return nil
+}
+
+func (c *OperatorConfigMapReconciler) reconcileWebhookService() error {
+	svc := &corev1.Service{}
+	svc.Name = templates.WebhookServiceName
+	svc.Namespace = c.OperatorNamespace
+	err := c.createOrUpdate(svc, func() error {
+		if err := c.own(svc); err != nil {
+			return err
+		}
+		utils.AddAnnotation(svc, "service.beta.openshift.io/serving-cert-secret-name", "webhook-cert-secret")
+		templates.WebhookService.Spec.DeepCopyInto(&svc.Spec)
+		return nil
+	})
+	if err != nil {
+		return err
+	}
+	c.log.Info("successfully reconciled webhook service")
+	return nil
+}
+
+func (c *OperatorConfigMapReconciler) list(obj client.ObjectList, opts ...client.ListOption) error {
+	return c.List(c.ctx, obj, opts...)
+}
+
+func (c *OperatorConfigMapReconciler) update(obj client.Object, opts ...client.UpdateOption) error {
+	return c.Update(c.ctx, obj, opts...)
+}
+
+func (c *OperatorConfigMapReconciler) delete(obj client.Object, opts ...client.DeleteOption) error {
+	if err := c.Delete(c.ctx, obj, opts...); err != nil && !kerrors.IsNotFound(err) {
+		return err
+	}
+	return nil
+}
diff --git a/controllers/storageclaim_controller.go b/controllers/storageclaim_controller.go
index e8460101..76346023 100644
--- a/controllers/storageclaim_controller.go
+++ b/controllers/storageclaim_controller.go
@@ -93,6 +93,7 @@ func (r *StorageClaimReconciler) SetupWithManager(mgr ctrl.Manager) error {
 			vsc := o.(*snapapi.VolumeSnapshotContent)
 			if vsc != nil &&
 				slices.Contains(csiDrivers, vsc.Spec.Driver) &&
+				vsc.Status != nil &&
 				vsc.Status.SnapshotHandle != nil {
 				parts := strings.Split(*vsc.Status.SnapshotHandle, "-")
 				if len(parts) == 9 {
@@ -152,7 +153,7 @@ func (r *StorageClaimReconciler) Reconcile(ctx context.Context, req ctrl.Request
 	r.storageClaimHash = getMD5Hash(r.storageClaim.Name)
 	r.storageClaim.Status.Phase = v1alpha1.StorageClaimInitializing
 
-	if r.storageClaim.Spec.StorageClient == nil {
+	if r.storageClaim.Spec.StorageClient == "" {
 		storageClientList := &v1alpha1.StorageClientList{}
 		if err := r.list(storageClientList); err != nil {
 			return reconcile.Result{}, err
@@ -170,8 +171,7 @@ func (r *StorageClaimReconciler) Reconcile(ctx context.Context, req ctrl.Request
 	} else {
 		// Fetch the StorageClient instance
 		r.storageClient = &v1alpha1.StorageClient{}
-		r.storageClient.Name = r.storageClaim.Spec.StorageClient.Name
-		r.storageClient.Namespace = r.storageClaim.Spec.StorageClient.Namespace
+		r.storageClient.Name = r.storageClaim.Spec.StorageClient
 		if err := r.get(r.storageClient); err != nil {
 			r.log.Error(err, "Failed to get StorageClient.")
 			return reconcile.Result{}, err
@@ -246,13 +246,14 @@ func (r *StorageClaimReconciler) reconcilePhases() (reconcile.Result, error) {
 			Name: r.storageClaim.Name,
 		},
 	}
+	var claimType string
 	if err = r.get(existing); err == nil {
-		sccType := r.storageClaim.Spec.Type
+		claimType =
strings.ToLower(r.storageClaim.Spec.Type)
 	sccEncryptionMethod := r.storageClaim.Spec.EncryptionMethod
 	_, scIsFSType := existing.Parameters["fsName"]
 	scEncryptionMethod, scHasEncryptionMethod := existing.Parameters["encryptionMethod"]
-	if !((sccType == "sharedfile" && scIsFSType && !scHasEncryptionMethod) ||
-		(sccType == "block" && !scIsFSType && sccEncryptionMethod == scEncryptionMethod)) {
+	if !((claimType == "sharedfile" && scIsFSType && !scHasEncryptionMethod) ||
+		(claimType == "block" && !scIsFSType && sccEncryptionMethod == scEncryptionMethod)) {
 		r.log.Error(fmt.Errorf("storageClaim is not compatible with existing StorageClass"),
 			"StorageClaim validation failed.")
 		r.storageClaim.Status.Phase = v1alpha1.StorageClaimFailed
@@ -265,18 +266,8 @@ func (r *StorageClaimReconciler) reconcilePhases() (reconcile.Result, error) {
 	// Configuration phase.
 	r.storageClaim.Status.Phase = v1alpha1.StorageClaimConfiguring

-	updateStorageClaim := false
 	// Check if finalizers are present, if not, add them.
-	if !contains(r.storageClaim.GetFinalizers(), storageClaimFinalizer) {
-		r.log.Info("Finalizer not found for StorageClaim. Adding finalizer.", "StorageClaim", r.storageClaim.Name)
-		r.storageClaim.SetFinalizers(append(r.storageClaim.GetFinalizers(), storageClaimFinalizer))
-		updateStorageClaim = true
-	}
-	if utils.AddAnnotation(r.storageClaim, storageClientAnnotationKey, client.ObjectKeyFromObject(r.storageClient).String()) {
-		updateStorageClaim = true
-	}
-
-	if updateStorageClaim {
+	if controllerutil.AddFinalizer(r.storageClaim, storageClaimFinalizer) {
 		if err := r.update(r.storageClaim); err != nil {
 			return reconcile.Result{}, fmt.Errorf("failed to update StorageClaim %q: %v", r.storageClaim.Name, err)
 		}
@@ -284,17 +275,17 @@ func (r *StorageClaimReconciler) reconcilePhases() (reconcile.Result, error) {

 	// storageClaimStorageType is the storage type of the StorageClaim
 	var storageClaimStorageType providerclient.StorageType
-	switch r.storageClaim.Spec.Type {
+	switch claimType {
 	case "block":
-		storageClaimStorageType = providerclient.StorageTypeBlockpool
+		storageClaimStorageType = providerclient.StorageTypeBlock
 	case "sharedfile":
-		storageClaimStorageType = providerclient.StorageTypeSharedfilesystem
+		storageClaimStorageType = providerclient.StorageTypeSharedFile
 	default:
-		return reconcile.Result{}, fmt.Errorf("unsupported storage type: %s", r.storageClaim.Spec.Type)
+		return reconcile.Result{}, fmt.Errorf("unsupported storage type: %s", claimType)
 	}

-	// Call the `FulfillStorageClassClaim` service on the provider server with StorageClaim as a request message.
+	// Call the `FulfillStorageClaim` service on the provider server with StorageClaim as a request message.
+	_, err = providerClient.FulfillStorageClaim(
 		r.ctx,
 		r.storageClient.Status.ConsumerID,
 		r.storageClaim.Name,
@@ -306,14 +297,14 @@ func (r *StorageClaimReconciler) reconcilePhases() (reconcile.Result, error) {
 		return reconcile.Result{}, fmt.Errorf("failed to initiate fulfillment of StorageClaim: %v", err)
 	}

-	// Call the `GetStorageClassClaimConfig` service on the provider server with StorageClaim as a request message.
-	response, err := providerClient.GetStorageClassClaimConfig(
+	// Call the `GetStorageClaimConfig` service on the provider server with StorageClaim as a request message.
+	response, err := providerClient.GetStorageClaimConfig(
 		r.ctx,
 		r.storageClient.Status.ConsumerID,
 		r.storageClaim.Name,
 	)
 	if err != nil {
-		return reconcile.Result{}, fmt.Errorf("failed to get StorageClassClaim config: %v", err)
+		return reconcile.Result{}, fmt.Errorf("failed to get StorageClaim config: %v", err)
 	}
 	resources := response.ExternalResource
 	if resources == nil {
@@ -339,7 +330,7 @@ func (r *StorageClaimReconciler) reconcilePhases() (reconcile.Result, error) {
 		data := map[string]string{}
 		err = json.Unmarshal(resource.Data, &data)
 		if err != nil {
-			return reconcile.Result{}, fmt.Errorf("failed to unmarshal StorageClassClaim configuration response: %v", err)
+			return reconcile.Result{}, fmt.Errorf("failed to unmarshal StorageClaim configuration response: %v", err)
 		}

 		// Create the received resources, if necessary.
@@ -347,7 +338,7 @@ func (r *StorageClaimReconciler) reconcilePhases() (reconcile.Result, error) {
 		case "Secret":
 			secret := &corev1.Secret{}
 			secret.Name = resource.Name
-			secret.Namespace = r.storageClient.Namespace
+			secret.Namespace = r.OperatorNamespace
 			_, err = controllerutil.CreateOrUpdate(r.ctx, r.Client, secret, func() error {
 				// cluster scoped resource owning namespace scoped resource which allows garbage collection
 				if err := r.own(secret); err != nil {
@@ -382,9 +373,9 @@ func (r *StorageClaimReconciler) reconcilePhases() (reconcile.Result, error) {
 			// same name.
 			csiClusterConfigEntry.ClusterID = r.storageClaimHash
 			var storageClass *storagev1.StorageClass
-			data["csi.storage.k8s.io/provisioner-secret-namespace"] = r.storageClient.Namespace
-			data["csi.storage.k8s.io/node-stage-secret-namespace"] = r.storageClient.Namespace
-			data["csi.storage.k8s.io/controller-expand-secret-namespace"] = r.storageClient.Namespace
+			data["csi.storage.k8s.io/provisioner-secret-namespace"] = r.OperatorNamespace
+			data["csi.storage.k8s.io/node-stage-secret-namespace"] = r.OperatorNamespace
+			data["csi.storage.k8s.io/controller-expand-secret-namespace"] = r.OperatorNamespace
 			data["clusterID"] = r.storageClaimHash

 			if resource.Name == "cephfs" {
@@ -403,7 +394,7 @@ func (r *StorageClaimReconciler) reconcilePhases() (reconcile.Result, error) {
 			}
 		case "VolumeSnapshotClass":
 			var volumeSnapshotClass *snapapi.VolumeSnapshotClass
-			data["csi.storage.k8s.io/snapshotter-secret-namespace"] = r.storageClient.Namespace
+			data["csi.storage.k8s.io/snapshotter-secret-namespace"] = r.OperatorNamespace
 			// generate a new clusterID for cephfs subvolumegroup, as
 			// storageclaim is clusterscoped resources using its
 			// hash as the clusterID
@@ -453,20 +444,19 @@ func (r *StorageClaimReconciler) reconcilePhases() (reconcile.Result, error) {
 			return reconcile.Result{}, fmt.Errorf("failed to update mon configmap: %v", err)
 		}

-		// Call `RevokeStorageClassClaim` service on the provider server with StorageClaim as a request message.
+		// Call `RevokeStorageClaim` service on the provider server with StorageClaim as a request message.
 		// Check if StorageClaim is still exists (it might have been manually removed during the StorageClass
 		// removal above).
-		_, err = providerClient.RevokeStorageClassClaim(
+		_, err = providerClient.RevokeStorageClaim(
 			r.ctx,
 			r.storageClient.Status.ConsumerID,
 			r.storageClaim.Name,
 		)
 		if err != nil {
-			return reconcile.Result{}, fmt.Errorf("failed to revoke StorageClassClaim: %s", err)
+			return reconcile.Result{}, fmt.Errorf("failed to revoke StorageClaim: %s", err)
 		}

-		if contains(r.storageClaim.GetFinalizers(), storageClaimFinalizer) {
-			r.storageClaim.Finalizers = remove(r.storageClaim.Finalizers, storageClaimFinalizer)
+		if controllerutil.RemoveFinalizer(r.storageClaim, storageClaimFinalizer) {
 			if err := r.update(r.storageClaim); err != nil {
 				return ctrl.Result{}, fmt.Errorf("failed to remove finalizer from storageClaim: %s", err)
 			}
@@ -595,8 +585,7 @@ func (r *StorageClaimReconciler) delete(obj client.Object) error {
}

func (r *StorageClaimReconciler) own(resource metav1.Object) error {
-	// Ensure StorageClaim ownership on a resource
-	return controllerutil.SetOwnerReference(r.storageClaim, resource, r.Scheme)
+	return controllerutil.SetControllerReference(r.storageClaim, resource, r.Scheme)
}

func (r *StorageClaimReconciler) createOrReplaceVolumeSnapshotClass(volumeSnapshotClass *snapapi.VolumeSnapshotClass) error {
diff --git a/controllers/storageclassclaim_migration_controller.go b/controllers/storageclassclaim_migration_controller.go
deleted file mode 100644
index 35a02c33..00000000
--- a/controllers/storageclassclaim_migration_controller.go
+++ /dev/null
@@ -1,135 +0,0 @@
-package controllers
-
-import (
-	"context"
-	"fmt"
-
-	"github.com/go-logr/logr"
-	"github.com/red-hat-storage/ocs-client-operator/api/v1alpha1"
-	corev1 "k8s.io/api/core/v1"
-	kerrors "k8s.io/apimachinery/pkg/api/errors"
-	"k8s.io/apimachinery/pkg/runtime"
-	ctrl "sigs.k8s.io/controller-runtime"
-	"sigs.k8s.io/controller-runtime/pkg/builder"
-	"sigs.k8s.io/controller-runtime/pkg/client"
-	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
-	"sigs.k8s.io/controller-runtime/pkg/event"
-	"sigs.k8s.io/controller-runtime/pkg/log"
-	"sigs.k8s.io/controller-runtime/pkg/predicate"
-)
-
-const (
-	// for migration of storageclassclaims to storageclaims
-	storageClassClaimFinalizer = "storageclassclaim.ocs.openshift.io"
-)
-
-// StorageClassClaimReconcile migrates StorageClassClaim objects to StorageClaim
-type StorageClassClaimMigrationReconciler struct {
-	client.Client
-	Scheme *runtime.Scheme
-
-	log logr.Logger
-	ctx context.Context
-}
-
-func (r *StorageClassClaimMigrationReconciler) SetupWithManager(mgr ctrl.Manager) error {
-	onlyCreateEvent := predicate.Funcs{
-		CreateFunc: func(event.CreateEvent) bool {
-			return true
-		},
-		DeleteFunc: func(event.DeleteEvent) bool {
-			return false
-		},
-		UpdateFunc: func(event.UpdateEvent) bool {
-			return false
-		},
-		GenericFunc: func(event.GenericEvent) bool {
-			return false
-		},
-	}
-	return ctrl.NewControllerManagedBy(mgr).
-		For(&v1alpha1.StorageClassClaim{}, builder.WithPredicates(onlyCreateEvent)).
-		Complete(r)
-}
-
-//+kubebuilder:rbac:groups=ocs.openshift.io,resources=storageclassclaims,verbs=get;list;watch;create;update;patch;delete
-//+kubebuilder:rbac:groups=ocs.openshift.io,resources=storageclassclaims/status,verbs=get;update;patch
-//+kubebuilder:rbac:groups=ocs.openshift.io,resources=storageclassclaims/finalizers,verbs=update
-
-func (r *StorageClassClaimMigrationReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
-
-	r.log = log.FromContext(ctx)
-	r.ctx = ctx
-	r.log.Info("Starting reconcile.")
-
-	storageClassClaim := &v1alpha1.StorageClassClaim{}
-	storageClassClaim.Name = req.Name
-
-	if err := r.get(storageClassClaim); err != nil && !kerrors.IsNotFound(err) {
-		r.log.Error(err, fmt.Sprintf("failed to get storageclassclaim %q", storageClassClaim.Name))
-		return ctrl.Result{}, err
-	}
-
-	storageClaim := &v1alpha1.StorageClaim{}
-	storageClaim.Name = storageClassClaim.Name
-
-	switch storageClassClaim.Spec.Type {
-	case "blockpool":
-		storageClaim.Spec.Type = "block"
-	case "sharedfilesystem":
-		storageClaim.Spec.Type = "sharedfile"
-	}
-
-	storageClaim.Spec.EncryptionMethod = storageClassClaim.Spec.EncryptionMethod
-	storageClaim.Spec.StorageProfile = storageClassClaim.Spec.StorageProfile
-	storageClaim.Spec.StorageClient = storageClassClaim.Spec.StorageClient.DeepCopy()
-
-	r.log.Info(fmt.Sprintf("Migrating storageclassclaim %q", storageClassClaim.Name))
-	if err := r.create(storageClaim); err != nil && !kerrors.IsAlreadyExists(err) {
-		return ctrl.Result{}, fmt.Errorf("failed to create storageclaims %q: %v", storageClaim.Name, err)
-	}
-
-	for idx := range storageClassClaim.Status.SecretNames {
-		secret := &corev1.Secret{}
-		secret.Name = storageClassClaim.Status.SecretNames[idx]
-		secret.Namespace = storageClassClaim.Spec.StorageClient.Namespace
-		if err := r.delete(secret); err != nil {
-			return ctrl.Result{}, fmt.Errorf("failed to delete secret %s: %v", client.ObjectKeyFromObject(secret), err)
-		}
-	}
-
-	// remove finalizer on existing storageclassclaim
-	finalizerUpdated := controllerutil.RemoveFinalizer(storageClassClaim, storageClassClaimFinalizer)
-	if finalizerUpdated {
-		if err := r.update(storageClassClaim); err != nil {
-			return ctrl.Result{}, fmt.Errorf("failed to remove finalizer on storageclassclaim %q: %v", storageClassClaim.Name, err)
-		}
-	}
-
-	// migration is successful delete the storageclassclaim
-	if err := r.delete(storageClassClaim); err != nil {
-		return ctrl.Result{}, fmt.Errorf("failed to delete storageclassclaim %q: %v", storageClassClaim.Name, err)
-	}
-
-	r.log.Info(fmt.Sprintf("Successfully migrated storageclassclaim %q to storageclass %q", storageClassClaim.Name, storageClaim.Name))
-	return ctrl.Result{}, nil
-}
-
-func (r *StorageClassClaimMigrationReconciler) get(obj client.Object, opts ...client.GetOption) error {
-	return r.Get(r.ctx, client.ObjectKeyFromObject(obj), obj, opts...)
-}
-
-func (r *StorageClassClaimMigrationReconciler) create(obj client.Object, opts ...client.CreateOption) error {
-	return r.Create(r.ctx, obj, opts...)
-}
-
-func (r *StorageClassClaimMigrationReconciler) update(obj client.Object, opts ...client.UpdateOption) error {
-	return r.Update(r.ctx, obj, opts...)
-}
-
-func (r *StorageClassClaimMigrationReconciler) delete(obj client.Object, opts ...client.DeleteOption) error {
-	if err := r.Delete(r.ctx, obj, opts...); err != nil && !kerrors.IsNotFound(err) {
-		return err
-	}
-	return nil
-}
diff --git a/controllers/storageclient_controller.go b/controllers/storageclient_controller.go
index a36c8fbb..bf93589b 100644
--- a/controllers/storageclient_controller.go
+++ b/controllers/storageclient_controller.go
@@ -18,9 +18,6 @@ package controllers

 import (
 	"context"
-	"crypto/md5"
-	"encoding/hex"
-	"encoding/json"
 	"fmt"
 	"os"
 	"strings"
@@ -37,13 +34,12 @@ import (
 	batchv1 "k8s.io/api/batch/v1"
 	corev1 "k8s.io/api/core/v1"
 	kerrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
-	"k8s.io/apimachinery/pkg/types"
 	"k8s.io/klog/v2"
 	ctrl "sigs.k8s.io/controller-runtime"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
-	"sigs.k8s.io/controller-runtime/pkg/handler"
 	"sigs.k8s.io/controller-runtime/pkg/log"
 	"sigs.k8s.io/controller-runtime/pkg/reconcile"
 )
@@ -55,19 +51,13 @@ const (
 	GetStorageConfig      = "GetStorageConfig"
 	AcknowledgeOnboarding = "AcknowledgeOnboarding"

-	storageClientAnnotationKey          = "ocs.openshift.io/storageclient"
-	storageClientNameLabel              = "ocs.openshift.io/storageclient.name"
-	storageClientNamespaceLabel         = "ocs.openshift.io/storageclient.namespace"
-	storageClientFinalizer              = "storageclient.ocs.openshift.io"
-	defaultClaimsOwnerAnnotationKey     = "ocs.openshift.io/storageclaim.owner"
-	defaultClaimsProcessedAnnotationKey = "ocs.openshift.io/storageclaim.processed"
-	defaultBlockStorageClaim            = "ocs-storagecluster-ceph-rbd"
-	defaultSharedfileStorageClaim       = "ocs-storagecluster-cephfs"
+	storageClientNameLabel             = "ocs.openshift.io/storageclient.name"
+	storageClientFinalizer             = "storageclient.ocs.openshift.io"
+	storageClaimProcessedAnnotationKey = "ocs.openshift.io/storageclaim.processed"
+	storageClientDefaultAnnotationKey  = "ocs.openshift.io/storageclient.default"

 	// indexes for caching
-	storageProviderEndpointIndexName = "index:storageProviderEndpoint"
-	storageClientAnnotationIndexName = "index:storageClientAnnotation"
-	defaultClaimsOwnerIndexName      = "index:defaultClaimsOwner"
+	ownerIndexName = "index:ownerUID"

 	csvPrefix = "ocs-client-operator"
)
@@ -76,58 +66,34 @@ const (
type StorageClientReconciler struct {
 	ctx context.Context
 	client.Client
-	Log      klog.Logger
-	Scheme   *runtime.Scheme
-	recorder *utils.EventReporter
+	Log           klog.Logger
+	Scheme        *runtime.Scheme
+	recorder      *utils.EventReporter
+	storageClient *v1alpha1.StorageClient

 	OperatorNamespace string
}

// SetupWithManager sets up the controller with the Manager.
-func (s *StorageClientReconciler) SetupWithManager(mgr ctrl.Manager) error {
+func (r *StorageClientReconciler) SetupWithManager(mgr ctrl.Manager) error {
 	ctx := context.Background()
-	// Index should be registered before cache start.
-	// IndexField is used to filter out the objects that already exists with
-	// status.phase != failed This will help in blocking
-	// the new storageclient creation if there is already with one with same
-	// provider endpoint with status.phase != failed
-	_ = mgr.GetCache().IndexField(ctx, &v1alpha1.StorageClient{}, storageProviderEndpointIndexName, func(o client.Object) []string {
-		res := []string{}
-		if o.(*v1alpha1.StorageClient).Status.Phase != v1alpha1.StorageClientFailed {
-			res = append(res, o.(*v1alpha1.StorageClient).Spec.StorageProviderEndpoint)
+	if err := mgr.GetCache().IndexField(ctx, &v1alpha1.StorageClaim{}, ownerIndexName, func(obj client.Object) []string {
+		refs := obj.GetOwnerReferences()
+		var owners []string
+		for i := range refs {
+			owners = append(owners, string(refs[i].UID))
 		}
-		return res
-	})
-
-	if err := mgr.GetCache().IndexField(ctx, &v1alpha1.StorageClaim{}, storageClientAnnotationIndexName, func(obj client.Object) []string {
-		return []string{obj.GetAnnotations()[storageClientAnnotationKey]}
-	}); err != nil {
-		return fmt.Errorf("unable to set up FieldIndexer for storageclient annotation: %v", err)
-	}
-
-	if err := mgr.GetCache().IndexField(ctx, &v1alpha1.StorageClient{}, defaultClaimsOwnerIndexName, func(obj client.Object) []string {
-		return []string{obj.GetAnnotations()[defaultClaimsOwnerAnnotationKey]}
+		return owners
 	}); err != nil {
-		return fmt.Errorf("unable to set up FieldIndexer for storageclient owner annotation: %v", err)
+		return fmt.Errorf("unable to set up FieldIndexer for StorageClaim's owner uid: %v", err)
 	}

-	enqueueStorageClientRequest := handler.EnqueueRequestsFromMapFunc(
-		func(_ context.Context, obj client.Object) []reconcile.Request {
-			annotations := obj.GetAnnotations()
-			if _, found := annotations[storageClaimAnnotation]; found {
-				return []reconcile.Request{{
-					NamespacedName: types.NamespacedName{
-						Name: obj.GetName(),
-					},
-				}}
-			}
-			return []reconcile.Request{}
-		})
-	s.recorder = utils.NewEventReporter(mgr.GetEventRecorderFor("controller_storageclient"))
+	r.recorder = utils.NewEventReporter(mgr.GetEventRecorderFor("controller_storageclient"))

 	return ctrl.NewControllerManagedBy(mgr).
 		For(&v1alpha1.StorageClient{}).
-		Watches(&v1alpha1.StorageClaim{}, enqueueStorageClientRequest).
-		Complete(s)
+		Owns(&v1alpha1.StorageClaim{}).
+		Owns(&batchv1.CronJob{}).
+		Complete(r)
}

//+kubebuilder:rbac:groups=ocs.openshift.io,resources=storageclients,verbs=get;list;watch;create;update;patch;delete
@@ -137,37 +103,34 @@ func (s *StorageClientReconciler) SetupWithManager(mgr ctrl.Manager) error {
//+kubebuilder:rbac:groups=batch,resources=cronjobs,verbs=get;list;create;update;watch;delete
//+kubebuilder:rbac:groups=operators.coreos.com,resources=clusterserviceversions,verbs=get;list;watch

-func (s *StorageClientReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
+func (r *StorageClientReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
 	var err error
-	s.ctx = ctx
-	s.Log = log.FromContext(ctx, "StorageClient", req)
-	s.Log.Info("Reconciling StorageClient")
+	r.ctx = ctx
+	r.Log = log.FromContext(ctx, "StorageClient", req)
+	r.Log.Info("Reconciling StorageClient")

-	// Fetch the StorageClient instance
-	instance := &v1alpha1.StorageClient{}
-	instance.Name = req.Name
-	instance.Namespace = req.Namespace
-
-	if err = s.Client.Get(ctx, types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, instance); err != nil {
+	r.storageClient = &v1alpha1.StorageClient{}
+	r.storageClient.Name = req.Name
+	if err = r.get(r.storageClient); err != nil {
 		if kerrors.IsNotFound(err) {
-			s.Log.Info("StorageClient resource not found. Ignoring since object must be deleted.")
+			r.Log.Info("StorageClient resource not found. Ignoring since object must be deleted.")
 			return reconcile.Result{}, nil
 		}
-		s.Log.Error(err, "Failed to get StorageClient.")
+		r.Log.Error(err, "Failed to get StorageClient.")
 		return reconcile.Result{}, fmt.Errorf("failed to get StorageClient: %v", err)
 	}

 	// Dont Reconcile the StorageClient if it is in failed state
-	if instance.Status.Phase == v1alpha1.StorageClientFailed {
+	if r.storageClient.Status.Phase == v1alpha1.StorageClientFailed {
 		return reconcile.Result{}, nil
 	}

-	result, reconcileErr := s.reconcilePhases(instance)
+	result, reconcileErr := r.reconcilePhases()

 	// Apply status changes to the StorageClient
-	statusErr := s.Client.Status().Update(ctx, instance)
+	statusErr := r.Client.Status().Update(ctx, r.storageClient)
 	if statusErr != nil {
-		s.Log.Error(statusErr, "Failed to update StorageClient status.")
+		r.Log.Error(statusErr, "Failed to update StorageClient status.")
 	}
 	if reconcileErr != nil {
 		err = reconcileErr
@@ -177,110 +140,107 @@ func (s *StorageClientReconciler) Reconcile(ctx context.Context, req ctrl.Reques
 	return result, err
}

-func (s *StorageClientReconciler) reconcilePhases(instance *v1alpha1.StorageClient) (ctrl.Result, error) {
-	storageClientListOption := []client.ListOption{
-		client.MatchingFields{storageProviderEndpointIndexName: instance.Spec.StorageProviderEndpoint},
-	}
-
-	storageClientList := &v1alpha1.StorageClientList{}
-	if err := s.Client.List(s.ctx, storageClientList, storageClientListOption...); err != nil {
-		s.Log.Error(err, "unable to list storage clients")
-		return ctrl.Result{}, err
-	}
+func (r *StorageClientReconciler) reconcilePhases() (ctrl.Result, error) {

-	if len(storageClientList.Items) > 1 {
-		s.Log.Info("one StorageClient is allowed per namespace but found more than one. Rejecting new request.")
-		instance.Status.Phase = v1alpha1.StorageClientFailed
-		// Dont Reconcile again as there is already a StorageClient with same provider endpoint
-		return reconcile.Result{}, nil
-	}
-
-	externalClusterClient, err := s.newExternalClusterClient(instance)
+	externalClusterClient, err := r.newExternalClusterClient()
 	if err != nil {
 		return reconcile.Result{}, err
 	}
 	defer externalClusterClient.Close()

 	// deletion phase
-	if !instance.GetDeletionTimestamp().IsZero() {
-		return s.deletionPhase(instance, externalClusterClient)
+	if !r.storageClient.GetDeletionTimestamp().IsZero() {
+		return r.deletionPhase(externalClusterClient)
+	}
+
+	updateStorageClient := false
+	storageClients := &v1alpha1.StorageClientList{}
+	if err := r.list(storageClients); err != nil {
+		r.Log.Error(err, "unable to list storage clients")
+		return ctrl.Result{}, err
+	}
+	if len(storageClients.Items) == 1 && storageClients.Items[0].Name == r.storageClient.Name {
+		if utils.AddAnnotation(r.storageClient, storageClientDefaultAnnotationKey, "true") {
+			updateStorageClient = true
+		}
 	}

 	// ensure finalizer
-	if !contains(instance.GetFinalizers(), storageClientFinalizer) {
-		instance.Status.Phase = v1alpha1.StorageClientInitializing
-		s.Log.Info("Finalizer not found for StorageClient. Adding finalizer.", "StorageClient", klog.KRef(instance.Namespace, instance.Name))
-		instance.ObjectMeta.Finalizers = append(instance.ObjectMeta.Finalizers, storageClientFinalizer)
-		if err := s.Client.Update(s.ctx, instance); err != nil {
-			s.Log.Info("Failed to update StorageClient with finalizer.", "StorageClient", klog.KRef(instance.Namespace, instance.Name))
-			return reconcile.Result{}, fmt.Errorf("failed to update StorageClient with finalizer: %v", err)
+	if controllerutil.AddFinalizer(r.storageClient, storageClientFinalizer) {
+		r.storageClient.Status.Phase = v1alpha1.StorageClientInitializing
+		r.Log.Info("Finalizer not found for StorageClient. Adding finalizer.", "StorageClient", r.storageClient.Name)
+		updateStorageClient = true
+	}
+
+	if updateStorageClient {
+		if err := r.update(r.storageClient); err != nil {
+			return reconcile.Result{}, fmt.Errorf("failed to update StorageClient: %v", err)
 		}
 	}

-	if instance.Status.ConsumerID == "" {
-		return s.onboardConsumer(instance, externalClusterClient)
-	} else if instance.Status.Phase == v1alpha1.StorageClientOnboarding {
-		return s.acknowledgeOnboarding(instance, externalClusterClient)
+	if r.storageClient.Status.ConsumerID == "" {
+		return r.onboardConsumer(externalClusterClient)
+	} else if r.storageClient.Status.Phase == v1alpha1.StorageClientOnboarding {
+		return r.acknowledgeOnboarding(externalClusterClient)
 	}

-	if res, err := s.reconcileClientStatusReporterJob(instance); err != nil {
+	if res, err := r.reconcileClientStatusReporterJob(); err != nil {
 		return res, err
 	}

-	if err := s.reconcileDefaultStorageClaims(instance); err != nil {
-		return reconcile.Result{}, err
+	if r.storageClient.GetAnnotations()[storageClaimProcessedAnnotationKey] != "true" {
+		if err := r.reconcileBlockStorageClaim(); err != nil {
+			return reconcile.Result{}, err
+		}
+
+		if err := r.reconcileSharedfileStorageClaim(); err != nil {
+			return reconcile.Result{}, err
+		}
+
+		utils.AddAnnotation(r.storageClient, storageClaimProcessedAnnotationKey, "true")
+		if err := r.update(r.storageClient); err != nil {
+			return reconcile.Result{}, fmt.Errorf("failed to update StorageClient with claim processed annotation: %v", err)
+		}
 	}

 	return reconcile.Result{}, nil
}

-func (s *StorageClientReconciler) deletionPhase(instance *v1alpha1.StorageClient, externalClusterClient *providerClient.OCSProviderClient) (ctrl.Result, error) {
+func (r *StorageClientReconciler) deletionPhase(externalClusterClient *providerClient.OCSProviderClient) (ctrl.Result, error) {
 	// TODO Need to take care of deleting the SCC created for this
 	// storageClient and also the default SCC created for this storageClient
-	if contains(instance.GetFinalizers(), storageClientFinalizer) {
-		instance.Status.Phase = v1alpha1.StorageClientOffboarding
-		err := s.verifyNoStorageClaimsExist(instance)
-		if err != nil {
-			s.Log.Error(err, "still storageclaims exist for this storageclient")
-			return reconcile.Result{}, fmt.Errorf("still storageclaims exist for this storageclient: %v", err)
-		}
-		if res, err := s.offboardConsumer(instance, externalClusterClient); err != nil {
-			s.Log.Error(err, "Offboarding in progress.")
-		} else if !res.IsZero() {
-			// result is not empty
-			return res, nil
-		}
+	r.storageClient.Status.Phase = v1alpha1.StorageClientOffboarding

-		cronJob := &batchv1.CronJob{}
-		cronJob.Name = getStatusReporterName(instance.Namespace, instance.Name)
-		cronJob.Namespace = s.OperatorNamespace
-
-		if err := s.delete(cronJob); err != nil {
-			s.Log.Error(err, "Failed to delete the status reporter job")
-			return reconcile.Result{}, fmt.Errorf("failed to delete the status reporter job: %v", err)
-		}
-
-		if err := s.deleteDefaultStorageClaims(instance); err != nil {
-			return reconcile.Result{}, fmt.Errorf("failed to delete default storageclaims: %v", err)
-		}
+	if err := r.deleteOwnedStorageClaims(); err != nil {
+		return reconcile.Result{}, fmt.Errorf("failed to delete storageclaims owned by storageclient %v: %v", r.storageClient.Name, err)
+	}
+	if err := r.verifyNoStorageClaimsExist(); err != nil {
+		r.Log.Error(err, "still storageclaims exist for this storageclient")
+		return reconcile.Result{}, fmt.Errorf("still storageclaims exist for this storageclient: %v", err)
+	}
+	if res, err := r.offboardConsumer(externalClusterClient); err != nil {
+		r.Log.Error(err, "Offboarding in progress.")
+	} else if !res.IsZero() {
+		// result is not empty
+		return res, nil
+	}

-		s.Log.Info("removing finalizer from StorageClient.", "StorageClient", klog.KRef(instance.Namespace, instance.Name))
-		// Once all finalizers have been removed, the object will be deleted
-		instance.ObjectMeta.Finalizers = remove(instance.ObjectMeta.Finalizers, storageClientFinalizer)
-		if err := s.Client.Update(s.ctx, instance); err != nil {
-			s.Log.Info("Failed to remove finalizer from StorageClient", "StorageClient", klog.KRef(instance.Namespace, instance.Name))
+	if controllerutil.RemoveFinalizer(r.storageClient, storageClientFinalizer) {
+		r.Log.Info("removing finalizer from StorageClient.", "StorageClient", r.storageClient.Name)
+		if err := r.update(r.storageClient); err != nil {
+			r.Log.Info("Failed to remove finalizer from StorageClient", "StorageClient", r.storageClient.Name)
 			return reconcile.Result{}, fmt.Errorf("failed to remove finalizer from StorageClient: %v", err)
 		}
 	}

-	s.Log.Info("StorageClient is offboarded", "StorageClient", klog.KRef(instance.Namespace, instance.Name))
+	r.Log.Info("StorageClient is offboarded", "StorageClient", r.storageClient.Name)
 	return reconcile.Result{}, nil
}

// newExternalClusterClient returns the *providerClient.OCSProviderClient
-func (s *StorageClientReconciler) newExternalClusterClient(instance *v1alpha1.StorageClient) (*providerClient.OCSProviderClient, error) {
+func (r *StorageClientReconciler) newExternalClusterClient() (*providerClient.OCSProviderClient, error) {
 	ocsProviderClient, err := providerClient.NewProviderClient(
-		s.ctx, instance.Spec.StorageProviderEndpoint, time.Second*10)
+		r.ctx, r.storageClient.Spec.StorageProviderEndpoint, time.Second*10)
 	if err != nil {
 		return nil, fmt.Errorf("failed to create a new provider client: %v", err)
 	}
@@ -289,21 +249,21 @@ func (s *StorageClientReconciler) newExternalClusterClient(instance *v1alpha1.St
}

// onboardConsumer makes an API call to the external storage provider cluster for onboarding
-func (s *StorageClientReconciler) onboardConsumer(instance *v1alpha1.StorageClient, externalClusterClient *providerClient.OCSProviderClient) (reconcile.Result, error) {
+func (r *StorageClientReconciler) onboardConsumer(externalClusterClient *providerClient.OCSProviderClient) (reconcile.Result, error) {
 	// TODO Need to find a way to get rid of ClusterVersion here as it is OCP
 	// specific one.
 	clusterVersion := &configv1.ClusterVersion{}
-	err := s.Client.Get(s.ctx, types.NamespacedName{Name: "version"}, clusterVersion)
-	if err != nil {
-		s.Log.Error(err, "failed to get the clusterVersion version of the OCP cluster")
+	clusterVersion.Name = "version"
+	if err := r.get(clusterVersion); err != nil {
+		r.Log.Error(err, "failed to get the clusterVersion version of the OCP cluster")
 		return reconcile.Result{}, fmt.Errorf("failed to get the clusterVersion version of the OCP cluster: %v", err)
 	}

 	// TODO Have a version file corresponding to the release
 	csvList := opv1a1.ClusterServiceVersionList{}
-	if err = s.list(&csvList, client.InNamespace(s.OperatorNamespace)); err != nil {
-		return reconcile.Result{}, fmt.Errorf("failed to list csv resources in ns: %v, err: %v", s.OperatorNamespace, err)
+	if err := r.list(&csvList, client.InNamespace(r.OperatorNamespace)); err != nil {
+		return reconcile.Result{}, fmt.Errorf("failed to list csv resources in ns: %v, err: %v", r.OperatorNamespace, err)
 	}
 	csv := utils.Find(csvList.Items, func(csv *opv1a1.ClusterServiceVersion) bool {
 		return strings.HasPrefix(csv.Name, csvPrefix)
@@ -314,52 +274,52 @@ func (s *StorageClientReconciler) onboardConsumer(instance *v1alpha1.StorageClie
 	name := fmt.Sprintf("storageconsumer-%s", clusterVersion.Spec.ClusterID)
 	onboardRequest := providerClient.NewOnboardConsumerRequest().
 		SetConsumerName(name).
-		SetOnboardingTicket(instance.Spec.OnboardingTicket).
+		SetOnboardingTicket(r.storageClient.Spec.OnboardingTicket).
 		SetClientOperatorVersion(csv.Spec.Version.String())

-	response, err := externalClusterClient.OnboardConsumer(s.ctx, onboardRequest)
+	response, err := externalClusterClient.OnboardConsumer(r.ctx, onboardRequest)
 	if err != nil {
 		if st, ok := status.FromError(err); ok {
-			s.logGrpcErrorAndReportEvent(instance, OnboardConsumer, err, st.Code())
+			r.logGrpcErrorAndReportEvent(OnboardConsumer, err, st.Code())
 		}
 		return reconcile.Result{}, fmt.Errorf("failed to onboard consumer: %v", err)
 	}

 	if response.StorageConsumerUUID == "" {
 		err = fmt.Errorf("storage provider response is empty")
-		s.Log.Error(err, "empty response")
+		r.Log.Error(err, "empty response")
 		return reconcile.Result{}, err
 	}

-	instance.Status.ConsumerID = response.StorageConsumerUUID
-	instance.Status.Phase = v1alpha1.StorageClientOnboarding
+	r.storageClient.Status.ConsumerID = response.StorageConsumerUUID
+	r.storageClient.Status.Phase = v1alpha1.StorageClientOnboarding

-	s.Log.Info("onboarding started")
+	r.Log.Info("onboarding started")
 	return reconcile.Result{Requeue: true}, nil
}

-func (s *StorageClientReconciler) acknowledgeOnboarding(instance *v1alpha1.StorageClient, externalClusterClient *providerClient.OCSProviderClient) (reconcile.Result, error) {
+func (r *StorageClientReconciler) acknowledgeOnboarding(externalClusterClient *providerClient.OCSProviderClient) (reconcile.Result, error) {

-	_, err := externalClusterClient.AcknowledgeOnboarding(s.ctx, instance.Status.ConsumerID)
+	_, err := externalClusterClient.AcknowledgeOnboarding(r.ctx, r.storageClient.Status.ConsumerID)
 	if err != nil {
 		if st, ok := status.FromError(err); ok {
-			s.logGrpcErrorAndReportEvent(instance, AcknowledgeOnboarding, err, st.Code())
+			r.logGrpcErrorAndReportEvent(AcknowledgeOnboarding, err, st.Code())
 		}
-		s.Log.Error(err, "Failed to acknowledge onboarding.")
+		r.Log.Error(err, "Failed to acknowledge onboarding.")
 		return reconcile.Result{}, fmt.Errorf("failed to acknowledge onboarding: %v", err)
 	}

-	instance.Status.Phase = v1alpha1.StorageClientConnected
+	r.storageClient.Status.Phase = v1alpha1.StorageClientConnected

-	s.Log.Info("Onboarding is acknowledged successfully.")
+	r.Log.Info("Onboarding is acknowledged successfully.")
 	return reconcile.Result{Requeue: true}, nil
}

// offboardConsumer makes an API call to the external storage provider cluster for offboarding
-func (s *StorageClientReconciler) offboardConsumer(instance *v1alpha1.StorageClient, externalClusterClient *providerClient.OCSProviderClient) (reconcile.Result, error) {
+func (r *StorageClientReconciler) offboardConsumer(externalClusterClient *providerClient.OCSProviderClient) (reconcile.Result, error) {

-	_, err := externalClusterClient.OffboardConsumer(s.ctx, instance.Status.ConsumerID)
+	_, err := externalClusterClient.OffboardConsumer(r.ctx, r.storageClient.Status.ConsumerID)
 	if err != nil {
 		if st, ok := status.FromError(err); ok {
-			s.logGrpcErrorAndReportEvent(instance, OffboardConsumer, err, st.Code())
+			r.logGrpcErrorAndReportEvent(OffboardConsumer, err, st.Code())
 		}
 		return reconcile.Result{}, fmt.Errorf("failed to offboard consumer: %v", err)
 	}
@@ -367,29 +327,43 @@ func (s *StorageClientReconciler) offboardConsumer(instance *v1alpha1.StorageCli
 	return reconcile.Result{}, nil
}

-func (s *StorageClientReconciler) verifyNoStorageClaimsExist(instance *v1alpha1.StorageClient) error {
+func (r *StorageClientReconciler) deleteOwnedStorageClaims() error {
+	storageClaims := &v1alpha1.StorageClaimList{}
+	if err := r.list(storageClaims, client.MatchingFields{ownerIndexName: string(r.storageClient.UID)}); err != nil {
+		return fmt.Errorf("failed to list storageClaims via owner reference: %v", err)
+	}
+
+	for idx := range storageClaims.Items {
+		storageClaim := &storageClaims.Items[idx]
+		if err := r.delete(storageClaim); err != nil {
+			return fmt.Errorf("failed to delete storageClaim %v: %v", storageClaim.Name, err)
+		}
+	}
+	return nil
+}
+
+func (r *StorageClientReconciler) verifyNoStorageClaimsExist() error {
 	storageClaims := &v1alpha1.StorageClaimList{}
-	err := s.Client.List(s.ctx,
-		storageClaims,
-		client.MatchingFields{storageClientAnnotationIndexName: client.ObjectKeyFromObject(instance).String()},
-		client.Limit(1),
-	)
-	if err != nil {
+	if err := r.list(storageClaims); err != nil {
 		return fmt.Errorf("failed to list storageClaims: %v", err)
 	}

-	if len(storageClaims.Items) != 0 {
-		err = fmt.Errorf("Failed to cleanup resources. storageClaims are present."+
-			"Delete all storageClaims corresponding to storageclient %q for the cleanup to proceed", client.ObjectKeyFromObject(instance))
-		s.recorder.ReportIfNotPresent(instance, corev1.EventTypeWarning, "Cleanup", err.Error())
-		s.Log.Error(err, "Waiting for all storageClaims to be deleted.")
-		return err
+	for idx := range storageClaims.Items {
+		storageClaim := &storageClaims.Items[idx]
+		if (storageClaim.Spec.StorageClient == "" && r.storageClient.Annotations[storageClientDefaultAnnotationKey] == "true") ||
+			storageClaim.Spec.StorageClient == r.storageClient.Name {
+			err := fmt.Errorf("failed to cleanup resources. storageClaims are present on the cluster")
+			r.recorder.ReportIfNotPresent(r.storageClient, corev1.EventTypeWarning, "Cleanup", err.Error())
+			r.Log.Error(err, "Waiting for all storageClaims to be deleted.")
+			return err
+		}
 	}

 	return nil
}

-func (s *StorageClientReconciler) logGrpcErrorAndReportEvent(instance *v1alpha1.StorageClient, grpcCallName string, err error, errCode codes.Code) {
+
+func (r *StorageClientReconciler) logGrpcErrorAndReportEvent(grpcCallName string, err error, errCode codes.Code) {

 	var msg, eventReason, eventType string
@@ -432,51 +406,28 @@ func (s *StorageClientReconciler) logGrpcErrorAndReportEvent(instance *v1alpha1.
 	}

 	if msg != "" {
-		s.Log.Error(err, "StorageProvider:"+grpcCallName+":"+msg)
-		s.recorder.ReportIfNotPresent(instance, eventType, eventReason, msg)
-	}
-}
-
-func getStatusReporterName(namespace, name string) string {
-	// getStatusReporterName generates a name for a StatusReporter CronJob.
-	var s struct {
-		StorageClientName      string `json:"storageClientName"`
-		StorageClientNamespace string `json:"storageClientNamespace"`
-	}
-	s.StorageClientName = name
-	s.StorageClientNamespace = namespace
-
-	statusReporterName, err := json.Marshal(s)
-	if err != nil {
-		klog.Errorf("failed to marshal a name for a storage client based on %v. %v", s, err)
-		panic("failed to marshal storage client name")
-	}
-	reporterName := md5.Sum([]byte(statusReporterName))
-	// The name of the StorageClient is the MD5 hash of the JSON
-	// representation of the StorageClient name and namespace.
-	return fmt.Sprintf("storageclient-%s-status-reporter", hex.EncodeToString(reporterName[:8]))
-}
-
-func (s *StorageClientReconciler) delete(obj client.Object) error {
-	if err := s.Client.Delete(s.ctx, obj); err != nil && !kerrors.IsNotFound(err) {
-		return err
+		r.Log.Error(err, "StorageProvider:"+grpcCallName+":"+msg)
+		r.recorder.ReportIfNotPresent(r.storageClient, eventType, eventReason, msg)
 	}
-	return nil
}

-func (s *StorageClientReconciler) reconcileClientStatusReporterJob(instance *v1alpha1.StorageClient) (reconcile.Result, error) {
-	// start the cronJob to ping the provider api server
+func (r *StorageClientReconciler) reconcileClientStatusReporterJob() (reconcile.Result, error) {
 	cronJob := &batchv1.CronJob{}
-	cronJob.Name = getStatusReporterName(instance.Namespace, instance.Name)
-	cronJob.Namespace = s.OperatorNamespace
-	utils.AddLabel(cronJob, storageClientNameLabel, instance.Name)
-	utils.AddLabel(cronJob, storageClientNamespaceLabel, instance.Namespace)
+	// maximum characters allowed for cronjob name is 52 and below interpolation creates 47 characters
+	cronJob.Name = fmt.Sprintf("storageclient-%s-status-reporter", getMD5Hash(r.storageClient.Name)[:16])
+	cronJob.Namespace = r.OperatorNamespace
+
 	var podDeadLineSeconds int64 = 120
 	jobDeadLineSeconds := podDeadLineSeconds + 35
 	var keepJobResourceSeconds int32 = 600
 	var reducedKeptSuccecsful int32 = 1

-	_, err := controllerutil.CreateOrUpdate(s.ctx, s.Client, cronJob, func() error {
+	_, err := controllerutil.CreateOrUpdate(r.ctx, r.Client, cronJob, func() error {
+		if err := r.own(cronJob); err != nil {
+			return fmt.Errorf("failed to own cronjob: %v", err)
+		}
+		// this helps during listing of cronjob by labels corresponding to the storageclient
+		utils.AddLabel(cronJob, storageClientNameLabel, r.storageClient.Name)
 		cronJob.Spec = batchv1.CronJobSpec{
 			Schedule:          "* * * * *",
 			ConcurrencyPolicy: batchv1.ForbidConcurrent,
@@ -496,17 +447,13 @@ func (s *StorageClientReconciler) reconcileClientStatusReporterJob(instance *v1a
 								"/status-reporter",
 							},
 							Env: []corev1.EnvVar{
-								{
-									Name:  utils.StorageClientNamespaceEnvVar,
-									Value: instance.Namespace,
-								},
 								{
 									Name:  utils.StorageClientNameEnvVar,
-									Value: instance.Name,
+									Value: r.storageClient.Name,
 								},
 								{
 									Name:  utils.OperatorNamespaceEnvVar,
-									Value: s.OperatorNamespace,
+									Value: r.OperatorNamespace,
 								},
 							},
 						},
@@ -526,98 +473,58 @@ func (s *StorageClientReconciler) reconcileClientStatusReporterJob(instance *v1a
 	return reconcile.Result{}, nil
}

-func (s *StorageClientReconciler) list(obj client.ObjectList, listOptions ...client.ListOption) error {
-	return s.Client.List(s.ctx, obj, listOptions...)
+func (r *StorageClientReconciler) list(obj client.ObjectList, listOptions ...client.ListOption) error {
+	return r.Client.List(r.ctx, obj, listOptions...)
} -func (s *StorageClientReconciler) reconcileDefaultStorageClaims(instance *v1alpha1.StorageClient) error { - - if instance.GetAnnotations()[defaultClaimsProcessedAnnotationKey] == "true" { - // we already processed default claims for this client - return nil - } - - // try to list the default client who is the default storage claims owner - claimOwners := &v1alpha1.StorageClientList{} - if err := s.list(claimOwners, client.MatchingFields{defaultClaimsOwnerIndexName: "true"}); err != nil { - return fmt.Errorf("failed to list default storage claims owner: %v", err) +func (r *StorageClientReconciler) reconcileBlockStorageClaim() error { + blockClaim := &v1alpha1.StorageClaim{} + blockClaim.Name = fmt.Sprintf("%s-ceph-rbd", r.storageClient.Name) + blockClaim.Spec.Type = "block" + blockClaim.Spec.StorageClient = r.storageClient.Name + if err := r.own(blockClaim); err != nil { + return fmt.Errorf("failed to own storageclaim of type block: %v", err) } - - if len(claimOwners.Items) == 0 { - // no other storageclient claims as an owner and take responsibility of creating the default claims by becoming owner - if utils.AddAnnotation(instance, defaultClaimsOwnerAnnotationKey, "true") { - if err := s.update(instance); err != nil { - return fmt.Errorf("not able to claim ownership of creating default storageclaims: %v", err) - } - } + if err := r.create(blockClaim); err != nil && !kerrors.IsAlreadyExists(err) { + return fmt.Errorf("failed to create block storageclaim: %v", err) } + return nil +} - // we successfully took the ownership to create a default claim from this storageclient, so create default claims if not created - // after claiming as an owner no other storageclient will try to take ownership for creating default storageclaims - annotations := instance.GetAnnotations() - if annotations[defaultClaimsOwnerAnnotationKey] == "true" && annotations[defaultClaimsProcessedAnnotationKey] != "true" { - if err := s.createDefaultBlockStorageClaim(instance); err != nil && 
!kerrors.IsAlreadyExists(err) { - return fmt.Errorf("failed to create %q storageclaim: %v", defaultBlockStorageClaim, err) - } - if err := s.createDefaultSharedfileStorageClaim(instance); err != nil && !kerrors.IsAlreadyExists(err) { - return fmt.Errorf("failed to create %q storageclaim: %v", defaultSharedfileStorageClaim, err) - } +func (r *StorageClientReconciler) reconcileSharedfileStorageClaim() error { + sharedfileClaim := &v1alpha1.StorageClaim{} + sharedfileClaim.Name = fmt.Sprintf("%s-cephfs", r.storageClient.Name) + sharedfileClaim.Spec.Type = "sharedfile" + sharedfileClaim.Spec.StorageClient = r.storageClient.Name + if err := r.own(sharedfileClaim); err != nil { + return fmt.Errorf("failed to own storageclaim of type sharedfile: %v", err) } - - // annotate that we created default storageclaims successfully and will not retry - if utils.AddAnnotation(instance, defaultClaimsProcessedAnnotationKey, "true") { - if err := s.update(instance); err != nil { - return fmt.Errorf("not able to update annotation for creation of default storageclaims: %v", err) - } + if err := r.create(sharedfileClaim); err != nil && !kerrors.IsAlreadyExists(err) { + return fmt.Errorf("failed to create sharedfile storageclaim: %v", err) } - return nil } -func (s *StorageClientReconciler) createDefaultBlockStorageClaim(instance *v1alpha1.StorageClient) error { - storageclaim := &v1alpha1.StorageClaim{} - storageclaim.Name = defaultBlockStorageClaim - storageclaim.Spec.Type = "block" - storageclaim.Spec.StorageClient = &v1alpha1.StorageClientNamespacedName{ - Name: instance.Name, - Namespace: instance.Namespace, - } - return s.create(storageclaim) +func (r *StorageClientReconciler) get(obj client.Object, opts ...client.GetOption) error { + key := client.ObjectKeyFromObject(obj) + return r.Get(r.ctx, key, obj, opts...) 
} -func (s *StorageClientReconciler) createDefaultSharedfileStorageClaim(instance *v1alpha1.StorageClient) error { - sharedfileClaim := &v1alpha1.StorageClaim{} - sharedfileClaim.Name = defaultSharedfileStorageClaim - sharedfileClaim.Spec.Type = "sharedfile" - sharedfileClaim.Spec.StorageClient = &v1alpha1.StorageClientNamespacedName{ - Name: instance.Name, - Namespace: instance.Namespace, - } - return s.create(sharedfileClaim) +func (r *StorageClientReconciler) update(obj client.Object, opts ...client.UpdateOption) error { + return r.Update(r.ctx, obj, opts...) } -func (s *StorageClientReconciler) deleteDefaultStorageClaims(instance *v1alpha1.StorageClient) error { - if instance.GetAnnotations()[defaultClaimsOwnerAnnotationKey] == "true" { - blockClaim := &v1alpha1.StorageClaim{} - blockClaim.Name = defaultBlockStorageClaim - if err := s.delete(blockClaim); err != nil { - return fmt.Errorf("failed to remove default storageclaim %q: %v", blockClaim.Name, err) - } +func (r *StorageClientReconciler) create(obj client.Object, opts ...client.CreateOption) error { + return r.Create(r.ctx, obj, opts...) +} - sharedfsClaim := &v1alpha1.StorageClaim{} - sharedfsClaim.Name = defaultSharedfileStorageClaim - if err := s.delete(sharedfsClaim); err != nil { - return fmt.Errorf("failed to remove default storageclaim %q: %v", blockClaim.Name, err) - } - s.Log.Info("Successfully deleted default storageclaims") +func (r *StorageClientReconciler) delete(obj client.Object, opts ...client.DeleteOption) error { + if err := r.Delete(r.ctx, obj, opts...); err != nil && !kerrors.IsNotFound(err) { + return err } return nil } -func (s *StorageClientReconciler) update(obj client.Object, opts ...client.UpdateOption) error { - return s.Update(s.ctx, obj, opts...) -} - -func (s *StorageClientReconciler) create(obj client.Object, opts ...client.CreateOption) error { - return s.Create(s.ctx, obj, opts...) 
+func (r *StorageClientReconciler) own(dependent metav1.Object) error { + return controllerutil.SetOwnerReference(r.storageClient, dependent, r.Scheme) } diff --git a/go.mod b/go.mod index 68b1cce6..ee1e8512 100644 --- a/go.mod +++ b/go.mod @@ -13,22 +13,22 @@ require ( github.com/go-logr/logr v1.4.1 github.com/kubernetes-csi/external-snapshotter/client/v6 v6.3.0 github.com/onsi/ginkgo v1.16.5 - github.com/onsi/gomega v1.30.0 - github.com/openshift/api v0.0.0-20240301093301-ce10821dc999 - github.com/operator-framework/api v0.20.0 + github.com/onsi/gomega v1.32.0 + github.com/openshift/api v0.0.0-20240323003854-2252c7adfb79 + github.com/operator-framework/api v0.22.0 github.com/pkg/errors v0.9.1 - github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.70.0 - github.com/red-hat-storage/ocs-operator/v4 v4.0.0-20240325171742-7a2177d09b00 + github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.72.0 + github.com/red-hat-storage/ocs-operator/v4 v4.0.0-20240422111920-faced96485bc github.com/stretchr/testify v1.9.0 - google.golang.org/grpc v1.60.0 + google.golang.org/grpc v1.62.1 gopkg.in/yaml.v2 v2.4.0 - k8s.io/api v0.29.2 - k8s.io/apiextensions-apiserver v0.28.4 - k8s.io/apimachinery v0.29.2 - k8s.io/client-go v0.29.2 + k8s.io/api v0.29.3 + k8s.io/apiextensions-apiserver v0.29.2 + k8s.io/apimachinery v0.29.3 + k8s.io/client-go v0.29.3 k8s.io/klog/v2 v2.120.1 - k8s.io/utils v0.0.0-20240102154912-e7106e64919e - sigs.k8s.io/controller-runtime v0.16.3 + k8s.io/utils v0.0.0-20240310230437-4693a0247e57 + sigs.k8s.io/controller-runtime v0.17.2 ) require ( @@ -37,7 +37,7 @@ require ( github.com/cespare/xxhash/v2 v2.2.0 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/emicklei/go-restful/v3 v3.11.3 // indirect - github.com/evanphx/json-patch/v5 v5.7.0 // indirect + github.com/evanphx/json-patch/v5 v5.8.0 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/go-logr/zapr v1.3.0 
// indirect github.com/go-openapi/jsonpointer v0.20.3 // indirect @@ -45,7 +45,7 @@ require ( github.com/go-openapi/swag v0.22.10 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect - github.com/golang/protobuf v1.5.3 // indirect + github.com/golang/protobuf v1.5.4 // indirect github.com/google/gnostic-models v0.6.8 // indirect github.com/google/go-cmp v0.6.0 // indirect github.com/google/gofuzz v1.2.0 // indirect @@ -60,7 +60,7 @@ require ( github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/nxadm/tail v1.4.8 // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect - github.com/prometheus/client_golang v1.17.0 // indirect + github.com/prometheus/client_golang v1.18.0 // indirect github.com/prometheus/client_model v0.5.0 // indirect github.com/prometheus/common v0.45.0 // indirect github.com/prometheus/procfs v0.12.0 // indirect @@ -77,12 +77,12 @@ require ( golang.org/x/time v0.5.0 // indirect gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect google.golang.org/appengine v1.6.8 // indirect - google.golang.org/genproto/googleapis/rpc v0.0.0-20231002182017-d307bd883b97 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 // indirect google.golang.org/protobuf v1.33.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect - k8s.io/component-base v0.28.4 // indirect + k8s.io/component-base v0.29.2 // indirect k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect diff --git a/go.sum b/go.sum index 128a3b21..916dd55b 100644 --- a/go.sum +++ b/go.sum @@ -12,8 +12,8 @@ github.com/emicklei/go-restful/v3 v3.11.3 h1:yagOQz/38xJmcNeZJtrUcKjkHRltIaIFXKW 
github.com/emicklei/go-restful/v3 v3.11.3/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= -github.com/evanphx/json-patch/v5 v5.7.0 h1:nJqP7uwL84RJInrohHfW0Fx3awjbm8qZeFv0nW9SYGc= -github.com/evanphx/json-patch/v5 v5.7.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ= +github.com/evanphx/json-patch/v5 v5.8.0 h1:lRj6N9Nci7MvzrXuX6HFzU8XjmhPiXPlsKEy1u0KQro= +github.com/evanphx/json-patch/v5 v5.8.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= @@ -44,8 +44,8 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= -github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg= -github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= +github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= +github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I= github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U= github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= @@ -95,33 +95,33 @@ github.com/onsi/ginkgo v1.6.0/go.mod 
h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk= github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE= github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU= -github.com/onsi/ginkgo/v2 v2.14.0 h1:vSmGj2Z5YPb9JwCWT6z6ihcUvDhuXLc3sJiqd3jMKAY= -github.com/onsi/ginkgo/v2 v2.14.0/go.mod h1:JkUdW7JkN0V6rFvsHcJ478egV3XH9NxpD27Hal/PhZw= +github.com/onsi/ginkgo/v2 v2.17.1 h1:V++EzdbhI4ZV4ev0UTIj0PzhzOcReJFyJaLjtSF55M8= +github.com/onsi/ginkgo/v2 v2.17.1/go.mod h1:llBI3WDLL9Z6taip6f33H76YcWtJv+7R3HigUjbIBOs= github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY= github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo= -github.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8= -github.com/onsi/gomega v1.30.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ= -github.com/openshift/api v0.0.0-20240301093301-ce10821dc999 h1:+S998xHiJApsJZjRAO8wyedU9GfqFd8mtwWly6LqHDo= -github.com/openshift/api v0.0.0-20240301093301-ce10821dc999/go.mod h1:CxgbWAlvu2iQB0UmKTtRu1YfepRg1/vJ64n2DlIEVz4= -github.com/operator-framework/api v0.20.0 h1:A2YCRhr+6s0k3pRJacnwjh1Ue8BqjIGuQ2jvPg9XCB4= -github.com/operator-framework/api v0.20.0/go.mod h1:rXPOhrQ6mMeXqCmpDgt1ALoar9ZlHL+Iy5qut9R99a4= +github.com/onsi/gomega v1.32.0 h1:JRYU78fJ1LPxlckP6Txi/EYqJvjtMrDC04/MM5XRHPk= +github.com/onsi/gomega v1.32.0/go.mod h1:a4x4gW6Pz2yK1MAmvluYme5lvYTn61afQ2ETw/8n4Lg= +github.com/openshift/api v0.0.0-20240323003854-2252c7adfb79 h1:ShXEPrqDUU9rUbvoIhOmQI8D6yHQdklMUks9ZVILTNE= +github.com/openshift/api v0.0.0-20240323003854-2252c7adfb79/go.mod h1:CxgbWAlvu2iQB0UmKTtRu1YfepRg1/vJ64n2DlIEVz4= +github.com/operator-framework/api v0.22.0 h1:UZSn+iaQih4rCReezOnWTTJkMyawwV5iLnIItaOzytY= +github.com/operator-framework/api v0.22.0/go.mod h1:p/7YDbr+n4fmESfZ47yLAV1SvkfE6NU2aX8KhcfI0GA= github.com/pkg/errors 
v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.70.0 h1:CFTvpkpVP4EXXZuaZuxpikAoma8xVha/IZKMDc9lw+Y= -github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.70.0/go.mod h1:npfc20mPOAu7ViOVnATVMbI7PoXvW99EzgJVqkAomIQ= -github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q= -github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY= +github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.72.0 h1:9h7PxMhT1S8lOdadEKJnBh3ELMdO60XkoDV98grYjuM= +github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.72.0/go.mod h1:4FiLCL664L4dNGeqZewiiD0NS7hhqi/CxyM4UOq5dfM= +github.com/prometheus/client_golang v1.18.0 h1:HzFfmkOzH5Q8L8G+kSJKUx5dtG87sewO+FoDDqP5Tbk= +github.com/prometheus/client_golang v1.18.0/go.mod h1:T+GXkCk5wSJyOqMIzVgvvjFDlkOQntgjkJWKrN5txjA= github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw= github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI= github.com/prometheus/common v0.45.0 h1:2BGz0eBc2hdMDLnO/8n0jeB3oPrt2D08CekT0lneoxM= github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGyv9MZjVOJsY= github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo= github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo= -github.com/red-hat-storage/ocs-operator/v4 v4.0.0-20240325171742-7a2177d09b00 
h1:XNUvXtgCSvj/dOWKUXOmzX/a7+cCAhdFilxc+MhO0TY= -github.com/red-hat-storage/ocs-operator/v4 v4.0.0-20240325171742-7a2177d09b00/go.mod h1:cE35IBM6w9KQfLbmJBR8b/jhoitTkvjPBb/Yp6QQnKI= +github.com/red-hat-storage/ocs-operator/v4 v4.0.0-20240422111920-faced96485bc h1:bV/ttKjR3nn9jIrOSt5UOttDE6iQ6l+bzLEFPWw335M= +github.com/red-hat-storage/ocs-operator/v4 v4.0.0-20240422111920-faced96485bc/go.mod h1:e4AElguwRgtyGEW7JtfJvphjYbcYG4hlpvwDYrQFGi8= github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M= github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= @@ -137,8 +137,8 @@ github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8 github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -go.uber.org/goleak v1.2.1 h1:NBol2c7O1ZokfZ0LEU9K6Whx/KnwvepVetCUhtKja4A= -go.uber.org/goleak v1.2.1/go.mod h1:qlT2yGI9QafXHhZZLxlSuNsMw3FFLxBr+tBRlmO1xH4= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= go.uber.org/zap v1.26.0 h1:sI7k6L95XOKS281NhVKOFCUNIvv9e0w4BF8N3u+tCRo= @@ -213,10 +213,10 @@ gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM= google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds= 
-google.golang.org/genproto/googleapis/rpc v0.0.0-20231002182017-d307bd883b97 h1:6GQBEOdGkX6MMTLT9V+TjtIRZCw9VPD5Z+yHY9wMgS0= -google.golang.org/genproto/googleapis/rpc v0.0.0-20231002182017-d307bd883b97/go.mod h1:v7nGkzlmW8P3n/bKmWBn2WpBjpOEx8Q6gMueudAmKfY= -google.golang.org/grpc v1.60.0 h1:6FQAR0kM31P6MRdeluor2w2gPaS4SVNrD/DNTxrQ15k= -google.golang.org/grpc v1.60.0/go.mod h1:OlCHIeLYqSSsLi6i49B5QGdzaMZK9+M7LXN2FKz4eGM= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 h1:AjyfHzEPEFp/NpvfN5g+KDla3EMojjhRVZc1i7cj+oM= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80/go.mod h1:PAREbraiVEVGVdTZsVWjSbbTtSyGbAgIIvni8a8CD5s= +google.golang.org/grpc v1.62.1 h1:B4n+nfKzOICUXMgyrNd19h/I9oH0L1pizfk1d4zSgTk= +google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= @@ -244,24 +244,24 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -k8s.io/api v0.29.2 h1:hBC7B9+MU+ptchxEqTNW2DkUosJpp1P+Wn6YncZ474A= -k8s.io/api v0.29.2/go.mod h1:sdIaaKuU7P44aoyyLlikSLayT6Vb7bvJNCX105xZXY0= -k8s.io/apiextensions-apiserver v0.28.4 h1:AZpKY/7wQ8n+ZYDtNHbAJBb+N4AXXJvyZx6ww6yAJvU= -k8s.io/apiextensions-apiserver v0.28.4/go.mod h1:pgQIZ1U8eJSMQcENew/0ShUTlePcSGFq6dxSxf2mwPM= -k8s.io/apimachinery v0.29.2 h1:EWGpfJ856oj11C52NRCHuU7rFDwxev48z+6DSlGNsV8= -k8s.io/apimachinery 
v0.29.2/go.mod h1:6HVkd1FwxIagpYrHSwJlQqZI3G9LfYWRPAkUvLnXTKU= -k8s.io/client-go v0.29.2 h1:FEg85el1TeZp+/vYJM7hkDlSTFZ+c5nnK44DJ4FyoRg= -k8s.io/client-go v0.29.2/go.mod h1:knlvFZE58VpqbQpJNbCbctTVXcd35mMyAAwBdpt4jrA= -k8s.io/component-base v0.28.4 h1:c/iQLWPdUgI90O+T9TeECg8o7N3YJTiuz2sKxILYcYo= -k8s.io/component-base v0.28.4/go.mod h1:m9hR0uvqXDybiGL2nf/3Lf0MerAfQXzkfWhUY58JUbU= +k8s.io/api v0.29.3 h1:2ORfZ7+bGC3YJqGpV0KSDDEVf8hdGQ6A03/50vj8pmw= +k8s.io/api v0.29.3/go.mod h1:y2yg2NTyHUUkIoTC+phinTnEa3KFM6RZ3szxt014a80= +k8s.io/apiextensions-apiserver v0.29.2 h1:UK3xB5lOWSnhaCk0RFZ0LUacPZz9RY4wi/yt2Iu+btg= +k8s.io/apiextensions-apiserver v0.29.2/go.mod h1:aLfYjpA5p3OwtqNXQFkhJ56TB+spV8Gc4wfMhUA3/b8= +k8s.io/apimachinery v0.29.3 h1:2tbx+5L7RNvqJjn7RIuIKu9XTsIZ9Z5wX2G22XAa5EU= +k8s.io/apimachinery v0.29.3/go.mod h1:hx/S4V2PNW4OMg3WizRrHutyB5la0iCUbZym+W0EQIU= +k8s.io/client-go v0.29.3 h1:R/zaZbEAxqComZ9FHeQwOh3Y1ZUs7FaHKZdQtIc2WZg= +k8s.io/client-go v0.29.3/go.mod h1:tkDisCvgPfiRpxGnOORfkljmS+UrW+WtXAy2fTvXJB0= +k8s.io/component-base v0.29.2 h1:lpiLyuvPA9yV1aQwGLENYyK7n/8t6l3nn3zAtFTJYe8= +k8s.io/component-base v0.29.2/go.mod h1:BfB3SLrefbZXiBfbM+2H1dlat21Uewg/5qtKOl8degM= k8s.io/klog/v2 v2.120.1 h1:QXU6cPEOIslTGvZaXvFWiP9VKyeet3sawzTOvdXb4Vw= k8s.io/klog/v2 v2.120.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag= k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98= -k8s.io/utils v0.0.0-20240102154912-e7106e64919e h1:eQ/4ljkx21sObifjzXwlPKpdGLrCfRziVtos3ofG/sQ= -k8s.io/utils v0.0.0-20240102154912-e7106e64919e/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= -sigs.k8s.io/controller-runtime v0.16.3 h1:2TuvuokmfXvDUamSx1SuAOO3eTyye+47mJCigwG62c4= -sigs.k8s.io/controller-runtime v0.16.3/go.mod h1:j7bialYoSn142nv9sCOJmQgDXQXxnroFU4VnX/brVJ0= +k8s.io/utils v0.0.0-20240310230437-4693a0247e57 
h1:gbqbevonBh57eILzModw6mrkbwM0gQBEuevE/AaBsHY= +k8s.io/utils v0.0.0-20240310230437-4693a0247e57/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +sigs.k8s.io/controller-runtime v0.17.2 h1:FwHwD1CTUemg0pW2otk7/U5/i5m2ymzvOXdbeGOUvw0= +sigs.k8s.io/controller-runtime v0.17.2/go.mod h1:+MngTvIQQQhfXtwfdGw/UOQ/aIaqsYywfCINOtwMO/s= sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo= sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4= diff --git a/hack/build-catalog.sh b/hack/build-catalog.sh new file mode 100755 index 00000000..e7a0f799 --- /dev/null +++ b/hack/build-catalog.sh @@ -0,0 +1,36 @@ +#!/bin/bash + +rm -rf catalog +rm -rf catalog.Dockerfile + +mkdir catalog + +${OPM} render --output=yaml ${BUNDLE_IMG} > catalog/ocs-client-bundle.yaml +${OPM} render --output=yaml ${CSI_ADDONS_BUNDLE_IMG} > catalog/csi-addons-bundle.yaml + +cat << EOF >> catalog/index.yaml +--- +defaultChannel: alpha +name: $IMAGE_NAME +schema: olm.package +--- +schema: olm.channel +package: ocs-client-operator +name: alpha +entries: + - name: $IMAGE_NAME.v$VERSION +--- +defaultChannel: alpha +name: $CSI_ADDONS_PACKAGE_NAME +schema: olm.package +--- +schema: olm.channel +package: csi-addons +name: alpha +entries: + - name: $CSI_ADDONS_PACKAGE_NAME.v$CSI_ADDONS_PACKAGE_VERSION +EOF + +${OPM} validate catalog +${OPM} generate dockerfile catalog +${IMAGE_BUILD_CMD} build --platform="linux/amd64" -t ${CATALOG_IMG} -f catalog.Dockerfile .
diff --git a/hack/go-test-setup.sh b/hack/go-test-setup.sh deleted file mode 100755 index be146f6e..00000000 --- a/hack/go-test-setup.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash - -ENVTEST_ASSETS_DIR="${ENVTEST_ASSETS_DIR:-testbin}" -SKIP_FETCH_TOOLS="${SKIP_FETCH_TOOLS:-}" - -mkdir -p "${ENVTEST_ASSETS_DIR}" - -pushd "${ENVTEST_ASSETS_DIR}" > /dev/null - - -if [ ! -f setup-envtest.sh ]; then - curl -sSLo setup-envtest.sh https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/v0.8.3/hack/setup-envtest.sh -fi - -source setup-envtest.sh - -fetch_envtest_tools "$(pwd)" -setup_envtest_env "$(pwd)" - -popd > /dev/null diff --git a/hack/go-test.sh b/hack/go-test.sh deleted file mode 100755 index 7b0d08af..00000000 --- a/hack/go-test.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/bin/bash - -SCRIPT_DIR="$(dirname "$(realpath "$0")")" - -source "${SCRIPT_DIR}/go-test-setup.sh" - -set -x - -go test -coverprofile cover.out `go list ./... | grep -v "e2e"` diff --git a/hack/make-bundle-vars.mk b/hack/make-bundle-vars.mk index 7102623c..0727b0e1 100644 --- a/hack/make-bundle-vars.mk +++ b/hack/make-bundle-vars.mk @@ -3,7 +3,7 @@ # To re-generate a bundle for another specific version without changing the standard setup, you can: # - use the VERSION as arg of the bundle target (e.g make bundle VERSION=0.0.2) # - use environment variables to overwrite this value (e.g export VERSION=0.0.2) -VERSION ?= 4.12.0 +VERSION ?= 4.16.0 # DEFAULT_CHANNEL defines the default channel used in the bundle. # Add a new line here if you would like to change its default config. 
(E.g DEFAULT_CHANNEL = "stable") @@ -49,7 +49,7 @@ IMAGE_TAG ?= latest IMAGE_NAME ?= ocs-client-operator BUNDLE_IMAGE_NAME ?= $(IMAGE_NAME)-bundle CSI_ADDONS_BUNDLE_IMAGE_NAME ?= k8s-bundle -CSI_ADDONS_BUNDLE_IMAGE_TAG ?= v0.5.0 +CSI_ADDONS_BUNDLE_IMAGE_TAG ?= v0.8.0 CATALOG_IMAGE_NAME ?= $(IMAGE_NAME)-catalog OCS_CLIENT_CONSOLE_IMG_NAME ?= ocs-client-console @@ -99,7 +99,7 @@ endif # csi-addons dependencies CSI_ADDONS_PACKAGE_NAME ?= csi-addons -CSI_ADDONS_PACKAGE_VERSION ?= "0.5.0" +CSI_ADDONS_PACKAGE_VERSION ?= 0.8.0 ## CSI driver images # The following variables define the default CSI container images to deploy diff --git a/hack/make-project-vars.mk b/hack/make-project-vars.mk index b572c4a3..9e9db8ec 100644 --- a/hack/make-project-vars.mk +++ b/hack/make-project-vars.mk @@ -1,12 +1,21 @@ PROJECT_DIR := $(PWD) BIN_DIR := $(PROJECT_DIR)/bin -ENVTEST_ASSETS_DIR := $(PROJECT_DIR)/testbin GOROOT ?= $(shell go env GOROOT) GOBIN ?= $(BIN_DIR) -GOOS ?= linux -GOARCH ?= amd64 +GOOS ?= $(shell go env GOOS) +GOARCH ?= $(shell go env GOARCH) GO_LINT_IMG_LOCATION ?= golangci/golangci-lint GO_LINT_IMG_TAG ?= v1.56.2 GO_LINT_IMG ?= $(GO_LINT_IMG_LOCATION):$(GO_LINT_IMG_TAG) + +ENVTEST_K8S_VERSION?=1.26 + +ifeq ($(IMAGE_BUILD_CMD),) +IMAGE_BUILD_CMD := $(shell command -v docker || echo "") +endif + +ifeq ($(IMAGE_BUILD_CMD),) +IMAGE_BUILD_CMD := $(shell command -v podman || echo "") +endif diff --git a/hack/make-tools.mk b/hack/make-tools.mk index 7ced7c87..31708cab 100644 --- a/hack/make-tools.mk +++ b/hack/make-tools.mk @@ -2,7 +2,7 @@ define go-get-tool @[ -f $(1) ] || { \ echo "Downloading $(2)" ;\ -GOBIN=$(PROJECT_DIR)/bin go install $(2) ;\ +$(shell GOBIN=$(PROJECT_DIR)/bin go install $(2)) \ } endef @@ -31,4 +31,8 @@ operator-sdk: ## Download operator-sdk locally if necessary. .PHONY: opm OPM = $(BIN_DIR)/opm opm: ## Download opm locally if necessary. 
- @./hack/get-tool.sh $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.15.1/$(GOOS)-$(GOARCH)-opm + @./hack/get-tool.sh $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.20.0/$(GOOS)-$(GOARCH)-opm + +ENVTEST ?= $(BIN_DIR)/setup-envtest +envtest: ## Download envtest-setup locally if necessary. + $(call go-get-tool,$(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest@latest) diff --git a/main.go b/main.go index 62cf1e2b..8e372553 100644 --- a/main.go +++ b/main.go @@ -17,7 +17,6 @@ limitations under the License. package main import ( - "context" "flag" "os" @@ -124,12 +123,6 @@ func main() { os.Exit(1) } - // apiclient.New() returns a client without cache. - // cache is not initialized before mgr.Start() - // we need this because we need to interact with OperatorCondition - apiClient, err := client.New(mgr.GetConfig(), client.Options{ - Scheme: mgr.GetScheme(), - }) if err != nil { setupLog.Error(err, "Unable to get Client") os.Exit(1) @@ -178,14 +171,6 @@ func main() { os.Exit(1) } - if err = (&controllers.StorageClassClaimMigrationReconciler{ - Client: mgr.GetClient(), - Scheme: mgr.GetScheme(), - }).SetupWithManager(mgr); err != nil { - setupLog.Error(err, "unable to create controller", "controller", "StorageClassClaim") - os.Exit(1) - } - if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil { setupLog.Error(err, "unable to set up health check") os.Exit(1) @@ -195,20 +180,13 @@ func main() { os.Exit(1) } - operatorDeployment, err := utils.GetOperatorDeployment(context.TODO(), apiClient) - if err != nil { - setupLog.Error(err, "unable to get operator deployment") - os.Exit(1) - } - - if err = (&controllers.ClusterVersionReconciler{ - Client: mgr.GetClient(), - Scheme: mgr.GetScheme(), - OperatorDeployment: operatorDeployment, - OperatorNamespace: utils.GetOperatorNamespace(), - ConsolePort: int32(consolePort), + if err = (&controllers.OperatorConfigMapReconciler{ + Client: 
mgr.GetClient(), + Scheme: mgr.GetScheme(), + OperatorNamespace: utils.GetOperatorNamespace(), + ConsolePort: int32(consolePort), }).SetupWithManager(mgr); err != nil { - setupLog.Error(err, "unable to create controller", "controller", "ClusterVersionReconciler") + setupLog.Error(err, "unable to create controller", "controller", "OperatorConfigMapReconciler") os.Exit(1) } diff --git a/pkg/csi/csidriver.go b/pkg/csi/csidriver.go index 18784ab0..dad6f59e 100644 --- a/pkg/csi/csidriver.go +++ b/pkg/csi/csidriver.go @@ -54,9 +54,6 @@ func CreateCSIDriver(ctx context.Context, client client.Client, csiDriver *v1k8s return err } -// TODO need to check how to delete the csidriver object - -//nolint:deadcode,unused func DeleteCSIDriver(ctx context.Context, client client.Client, name string) error { csiDriver := &v1k8scsi.CSIDriver{ ObjectMeta: metav1.ObjectMeta{ diff --git a/pkg/templates/webhookservice.go b/pkg/templates/webhookservice.go new file mode 100644 index 00000000..ab5616af --- /dev/null +++ b/pkg/templates/webhookservice.go @@ -0,0 +1,29 @@ +package templates + +import ( + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/util/intstr" +) + +const ( + // should be <.metadata.name from config/manager/webhook_service.yaml> + WebhookServiceName = "ocs-client-operator-webhook-server" +) + +// should match the spec at config/manager/webhook_service.yaml +var WebhookService = corev1.Service{ + Spec: corev1.ServiceSpec{ + Ports: []corev1.ServicePort{ + { + Name: "https", + Port: 443, + Protocol: corev1.ProtocolTCP, + TargetPort: intstr.FromInt32(7443), + }, + }, + Selector: map[string]string{ + "app": "ocs-client-operator", + }, + Type: corev1.ServiceTypeClusterIP, + }, +} diff --git a/pkg/utils/deployment.go b/pkg/utils/deployment.go deleted file mode 100644 index 37008eec..00000000 --- a/pkg/utils/deployment.go +++ /dev/null @@ -1,40 +0,0 @@ -/* -Copyright 2022 Red Hat, Inc. 
- -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package utils - -import ( - "context" - "os" - "strings" - - appsv1 "k8s.io/api/apps/v1" - "k8s.io/apimachinery/pkg/types" - "sigs.k8s.io/controller-runtime/pkg/client" -) - -// GetOperatorDeployment returns the operator deployment object -func GetOperatorDeployment(ctx context.Context, c client.Client) (*appsv1.Deployment, error) { - deployment := &appsv1.Deployment{} - podNameStrings := strings.Split(os.Getenv(OperatorPodNameEnvVar), "-") - deploymentName := strings.Join(podNameStrings[:len(podNameStrings)-2], "-") - err := c.Get(ctx, types.NamespacedName{Name: deploymentName, Namespace: GetOperatorNamespace()}, deployment) - if err != nil { - return nil, err - } - - return deployment, nil -} diff --git a/pkg/utils/k8sutils.go b/pkg/utils/k8sutils.go index 5d47b2c2..11dce7a8 100644 --- a/pkg/utils/k8sutils.go +++ b/pkg/utils/k8sutils.go @@ -34,9 +34,6 @@ const OperatorPodNameEnvVar = "OPERATOR_POD_NAME" // StorageClientNameEnvVar is the constant for env variable STORAGE_CLIENT_NAME const StorageClientNameEnvVar = "STORAGE_CLIENT_NAME" -// StorageClientNamespaceEnvVar is the constant for env variable STORAGE_CLIENT_NAMESPACE -const StorageClientNamespaceEnvVar = "STORAGE_CLIENT_NAMESPACE" - const StatusReporterImageEnvVar = "STATUS_REPORTER_IMAGE" // Value corresponding to annotation key has subscription channel diff --git a/service/status-report/main.go b/service/status-report/main.go index 67fb3d2e..5d62f59b 100644 --- 
a/service/status-report/main.go +++ b/service/status-report/main.go @@ -18,6 +18,8 @@ package main import ( "context" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/util/yaml" "os" "strings" "time" @@ -38,7 +40,9 @@ import ( ) const ( - csvPrefix = "ocs-client-operator" + csvPrefix = "ocs-client-operator" + clusterConfigNamespace = "kube-system" + clusterConfigName = "cluster-config-v1" ) func main() { @@ -70,11 +74,6 @@ func main() { ctx := context.Background() - storageClientNamespace, isSet := os.LookupEnv(utils.StorageClientNamespaceEnvVar) - if !isSet { - klog.Exitf("%s env var not set", utils.StorageClientNamespaceEnvVar) - } - storageClientName, isSet := os.LookupEnv(utils.StorageClientNameEnvVar) if !isSet { klog.Exitf("%s env var not set", utils.StorageClientNameEnvVar) @@ -86,15 +85,14 @@ func main() { } storageClient := &v1alpha1.StorageClient{} storageClient.Name = storageClientName - storageClient.Namespace = storageClientNamespace - if err = cl.Get(ctx, types.NamespacedName{Name: storageClient.Name, Namespace: storageClient.Namespace}, storageClient); err != nil { + if err = cl.Get(ctx, client.ObjectKeyFromObject(storageClient), storageClient); err != nil { klog.Exitf("Failed to get storageClient %q/%q: %v", storageClient.Namespace, storageClient.Name, err) } var oprVersion string csvList := opv1a1.ClusterServiceVersionList{} - if err = cl.List(ctx, &csvList, client.InNamespace(storageClientNamespace)); err != nil { + if err = cl.List(ctx, &csvList, client.InNamespace(operatorNamespace)); err != nil { klog.Warningf("Failed to list csv resources: %v", err) } else { item := utils.Find(csvList.Items, func(csv *opv1a1.ClusterServiceVersion) bool { @@ -127,9 +125,27 @@ func main() { klog.Warningf("Unable to find ocp version with completed update") } - namespacedName := types.NamespacedName{ - Namespace: storageClient.Namespace, - Name: storageClient.Name, + clusterConfig := &corev1.ConfigMap{} + clusterConfig.Name = clusterConfigName + 
clusterConfig.Namespace = clusterConfigNamespace + + if err = cl.Get(ctx, client.ObjectKeyFromObject(clusterConfig), clusterConfig); err != nil { + klog.Warningf("Failed to get clusterConfig %q/%q: %v", clusterConfig.Namespace, clusterConfig.Name, err) + } + + clusterMetadataYAML := clusterConfig.Data["install-config"] + clusterMetadata := struct { + Metadata struct { + Name string `yaml:"name"` + } `yaml:"metadata"` + }{} + err = yaml.Unmarshal([]byte(clusterMetadataYAML), &clusterMetadata) + if err != nil { + klog.Warningf("Fatal error, %v", err) + } + clusterName := "" + if len(clusterMetadata.Metadata.Name) > 0 { + clusterName = clusterMetadata.Metadata.Name } providerClient, err := providerclient.NewProviderClient( @@ -146,7 +162,8 @@ func main() { SetPlatformVersion(pltVersion). SetOperatorVersion(oprVersion). SetClusterID(string(clusterID)). - SetNamespacedName(namespacedName.String()) + SetClusterName(clusterName). + SetClientName(storageClientName) statusResponse, err := providerClient.ReportStatus(ctx, storageClient.Status.ConsumerID, status) if err != nil { klog.Exitf("Failed to report status of storageClient %v: %v", storageClient.Status.ConsumerID, err) diff --git a/vendor/github.com/evanphx/json-patch/v5/internal/json/decode.go b/vendor/github.com/evanphx/json-patch/v5/internal/json/decode.go new file mode 100644 index 00000000..e9bb0efe --- /dev/null +++ b/vendor/github.com/evanphx/json-patch/v5/internal/json/decode.go @@ -0,0 +1,1385 @@ +// Copyright 2010 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Represents JSON data structure using native Go types: booleans, floats, +// strings, arrays, and maps. 
+ +package json + +import ( + "encoding" + "encoding/base64" + "fmt" + "reflect" + "strconv" + "strings" + "sync" + "unicode" + "unicode/utf16" + "unicode/utf8" +) + +// Unmarshal parses the JSON-encoded data and stores the result +// in the value pointed to by v. If v is nil or not a pointer, +// Unmarshal returns an InvalidUnmarshalError. +// +// Unmarshal uses the inverse of the encodings that +// Marshal uses, allocating maps, slices, and pointers as necessary, +// with the following additional rules: +// +// To unmarshal JSON into a pointer, Unmarshal first handles the case of +// the JSON being the JSON literal null. In that case, Unmarshal sets +// the pointer to nil. Otherwise, Unmarshal unmarshals the JSON into +// the value pointed at by the pointer. If the pointer is nil, Unmarshal +// allocates a new value for it to point to. +// +// To unmarshal JSON into a value implementing the Unmarshaler interface, +// Unmarshal calls that value's UnmarshalJSON method, including +// when the input is a JSON null. +// Otherwise, if the value implements encoding.TextUnmarshaler +// and the input is a JSON quoted string, Unmarshal calls that value's +// UnmarshalText method with the unquoted form of the string. +// +// To unmarshal JSON into a struct, Unmarshal matches incoming object +// keys to the keys used by Marshal (either the struct field name or its tag), +// preferring an exact match but also accepting a case-insensitive match. By +// default, object keys which don't have a corresponding struct field are +// ignored (see Decoder.DisallowUnknownFields for an alternative). 
+// +// To unmarshal JSON into an interface value, +// Unmarshal stores one of these in the interface value: +// +// bool, for JSON booleans +// float64, for JSON numbers +// string, for JSON strings +// []interface{}, for JSON arrays +// map[string]interface{}, for JSON objects +// nil for JSON null +// +// To unmarshal a JSON array into a slice, Unmarshal resets the slice length +// to zero and then appends each element to the slice. +// As a special case, to unmarshal an empty JSON array into a slice, +// Unmarshal replaces the slice with a new empty slice. +// +// To unmarshal a JSON array into a Go array, Unmarshal decodes +// JSON array elements into corresponding Go array elements. +// If the Go array is smaller than the JSON array, +// the additional JSON array elements are discarded. +// If the JSON array is smaller than the Go array, +// the additional Go array elements are set to zero values. +// +// To unmarshal a JSON object into a map, Unmarshal first establishes a map to +// use. If the map is nil, Unmarshal allocates a new map. Otherwise Unmarshal +// reuses the existing map, keeping existing entries. Unmarshal then stores +// key-value pairs from the JSON object into the map. The map's key type must +// either be any string type, an integer, implement json.Unmarshaler, or +// implement encoding.TextUnmarshaler. +// +// If the JSON-encoded data contain a syntax error, Unmarshal returns a SyntaxError. +// +// If a JSON value is not appropriate for a given target type, +// or if a JSON number overflows the target type, Unmarshal +// skips that field and completes the unmarshaling as best it can. +// If no more serious errors are encountered, Unmarshal returns +// an UnmarshalTypeError describing the earliest such error. In any +// case, it's not guaranteed that all the remaining fields following +// the problematic one will be unmarshaled into the target object. 
+// +// The JSON null value unmarshals into an interface, map, pointer, or slice +// by setting that Go value to nil. Because null is often used in JSON to mean +// “not present,” unmarshaling a JSON null into any other Go type has no effect +// on the value and produces no error. +// +// When unmarshaling quoted strings, invalid UTF-8 or +// invalid UTF-16 surrogate pairs are not treated as an error. +// Instead, they are replaced by the Unicode replacement +// character U+FFFD. +func Unmarshal(data []byte, v any) error { + // Check for well-formedness. + // Avoids filling out half a data structure + // before discovering a JSON syntax error. + d := ds.Get().(*decodeState) + defer ds.Put(d) + //var d decodeState + d.useNumber = true + err := checkValid(data, &d.scan) + if err != nil { + return err + } + + d.init(data) + return d.unmarshal(v) +} + +var ds = sync.Pool{ + New: func() any { + return new(decodeState) + }, +} + +func UnmarshalWithKeys(data []byte, v any) ([]string, error) { + // Check for well-formedness. + // Avoids filling out half a data structure + // before discovering a JSON syntax error. + + d := ds.Get().(*decodeState) + defer ds.Put(d) + //var d decodeState + d.useNumber = true + err := checkValid(data, &d.scan) + if err != nil { + return nil, err + } + + d.init(data) + err = d.unmarshal(v) + if err != nil { + return nil, err + } + + return d.lastKeys, nil +} + +func UnmarshalValid(data []byte, v any) error { + // Check for well-formedness. + // Avoids filling out half a data structure + // before discovering a JSON syntax error. + d := ds.Get().(*decodeState) + defer ds.Put(d) + //var d decodeState + d.useNumber = true + + d.init(data) + return d.unmarshal(v) +} + +func UnmarshalValidWithKeys(data []byte, v any) ([]string, error) { + // Check for well-formedness. + // Avoids filling out half a data structure + // before discovering a JSON syntax error. 
+ + d := ds.Get().(*decodeState) + defer ds.Put(d) + //var d decodeState + d.useNumber = true + + d.init(data) + err := d.unmarshal(v) + if err != nil { + return nil, err + } + + return d.lastKeys, nil +} + +// Unmarshaler is the interface implemented by types +// that can unmarshal a JSON description of themselves. +// The input can be assumed to be a valid encoding of +// a JSON value. UnmarshalJSON must copy the JSON data +// if it wishes to retain the data after returning. +// +// By convention, to approximate the behavior of Unmarshal itself, +// Unmarshalers implement UnmarshalJSON([]byte("null")) as a no-op. +type Unmarshaler interface { + UnmarshalJSON([]byte) error +} + +// An UnmarshalTypeError describes a JSON value that was +// not appropriate for a value of a specific Go type. +type UnmarshalTypeError struct { + Value string // description of JSON value - "bool", "array", "number -5" + Type reflect.Type // type of Go value it could not be assigned to + Offset int64 // error occurred after reading Offset bytes + Struct string // name of the struct type containing the field + Field string // the full path from root node to the field +} + +func (e *UnmarshalTypeError) Error() string { + if e.Struct != "" || e.Field != "" { + return "json: cannot unmarshal " + e.Value + " into Go struct field " + e.Struct + "." + e.Field + " of type " + e.Type.String() + } + return "json: cannot unmarshal " + e.Value + " into Go value of type " + e.Type.String() +} + +// An UnmarshalFieldError describes a JSON object key that +// led to an unexported (and therefore unwritable) struct field. +// +// Deprecated: No longer used; kept for compatibility. 
+type UnmarshalFieldError struct { + Key string + Type reflect.Type + Field reflect.StructField +} + +func (e *UnmarshalFieldError) Error() string { + return "json: cannot unmarshal object key " + strconv.Quote(e.Key) + " into unexported field " + e.Field.Name + " of type " + e.Type.String() +} + +// An InvalidUnmarshalError describes an invalid argument passed to Unmarshal. +// (The argument to Unmarshal must be a non-nil pointer.) +type InvalidUnmarshalError struct { + Type reflect.Type +} + +func (e *InvalidUnmarshalError) Error() string { + if e.Type == nil { + return "json: Unmarshal(nil)" + } + + if e.Type.Kind() != reflect.Pointer { + return "json: Unmarshal(non-pointer " + e.Type.String() + ")" + } + return "json: Unmarshal(nil " + e.Type.String() + ")" +} + +func (d *decodeState) unmarshal(v any) error { + rv := reflect.ValueOf(v) + if rv.Kind() != reflect.Pointer || rv.IsNil() { + return &InvalidUnmarshalError{reflect.TypeOf(v)} + } + + d.scan.reset() + d.scanWhile(scanSkipSpace) + // We decode rv not rv.Elem because the Unmarshaler interface + // test must be applied at the top level of the value. + err := d.value(rv) + if err != nil { + return d.addErrorContext(err) + } + return d.savedError +} + +// A Number represents a JSON number literal. +type Number string + +// String returns the literal text of the number. +func (n Number) String() string { return string(n) } + +// Float64 returns the number as a float64. +func (n Number) Float64() (float64, error) { + return strconv.ParseFloat(string(n), 64) +} + +// Int64 returns the number as an int64. +func (n Number) Int64() (int64, error) { + return strconv.ParseInt(string(n), 10, 64) +} + +// An errorContext provides context for type errors during decoding. +type errorContext struct { + Struct reflect.Type + FieldStack []string +} + +// decodeState represents the state while decoding a JSON value. 
+type decodeState struct { + data []byte + off int // next read offset in data + opcode int // last read result + scan scanner + errorContext *errorContext + savedError error + useNumber bool + disallowUnknownFields bool + lastKeys []string +} + +// readIndex returns the position of the last byte read. +func (d *decodeState) readIndex() int { + return d.off - 1 +} + +// phasePanicMsg is used as a panic message when we end up with something that +// shouldn't happen. It can indicate a bug in the JSON decoder, or that +// something is editing the data slice while the decoder executes. +const phasePanicMsg = "JSON decoder out of sync - data changing underfoot?" + +func (d *decodeState) init(data []byte) *decodeState { + d.data = data + d.off = 0 + d.savedError = nil + if d.errorContext != nil { + d.errorContext.Struct = nil + // Reuse the allocated space for the FieldStack slice. + d.errorContext.FieldStack = d.errorContext.FieldStack[:0] + } + return d +} + +// saveError saves the first err it is called with, +// for reporting at the end of the unmarshal. +func (d *decodeState) saveError(err error) { + if d.savedError == nil { + d.savedError = d.addErrorContext(err) + } +} + +// addErrorContext returns a new error enhanced with information from d.errorContext +func (d *decodeState) addErrorContext(err error) error { + if d.errorContext != nil && (d.errorContext.Struct != nil || len(d.errorContext.FieldStack) > 0) { + switch err := err.(type) { + case *UnmarshalTypeError: + err.Struct = d.errorContext.Struct.Name() + err.Field = strings.Join(d.errorContext.FieldStack, ".") + } + } + return err +} + +// skip scans to the end of what was started. +func (d *decodeState) skip() { + s, data, i := &d.scan, d.data, d.off + depth := len(s.parseState) + for { + op := s.step(s, data[i]) + i++ + if len(s.parseState) < depth { + d.off = i + d.opcode = op + return + } + } +} + +// scanNext processes the byte at d.data[d.off]. 
+func (d *decodeState) scanNext() { + if d.off < len(d.data) { + d.opcode = d.scan.step(&d.scan, d.data[d.off]) + d.off++ + } else { + d.opcode = d.scan.eof() + d.off = len(d.data) + 1 // mark processed EOF with len+1 + } +} + +// scanWhile processes bytes in d.data[d.off:] until it +// receives a scan code not equal to op. +func (d *decodeState) scanWhile(op int) { + s, data, i := &d.scan, d.data, d.off + for i < len(data) { + newOp := s.step(s, data[i]) + i++ + if newOp != op { + d.opcode = newOp + d.off = i + return + } + } + + d.off = len(data) + 1 // mark processed EOF with len+1 + d.opcode = d.scan.eof() +} + +// rescanLiteral is similar to scanWhile(scanContinue), but it specialises the +// common case where we're decoding a literal. The decoder scans the input +// twice, once for syntax errors and to check the length of the value, and the +// second to perform the decoding. +// +// Only in the second step do we use decodeState to tokenize literals, so we +// know there aren't any syntax errors. We can take advantage of that knowledge, +// and scan a literal's bytes much more quickly. +func (d *decodeState) rescanLiteral() { + data, i := d.data, d.off +Switch: + switch data[i-1] { + case '"': // string + for ; i < len(data); i++ { + switch data[i] { + case '\\': + i++ // escaped char + case '"': + i++ // tokenize the closing quote too + break Switch + } + } + case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '-': // number + for ; i < len(data); i++ { + switch data[i] { + case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', + '.', 'e', 'E', '+', '-': + default: + break Switch + } + } + case 't': // true + i += len("rue") + case 'f': // false + i += len("alse") + case 'n': // null + i += len("ull") + } + if i < len(data) { + d.opcode = stateEndValue(&d.scan, data[i]) + } else { + d.opcode = scanEnd + } + d.off = i + 1 +} + +// value consumes a JSON value from d.data[d.off-1:], decoding into v, and +// reads the following byte ahead. 
If v is invalid, the value is discarded. +// The first byte of the value has been read already. +func (d *decodeState) value(v reflect.Value) error { + switch d.opcode { + default: + panic(phasePanicMsg) + + case scanBeginArray: + if v.IsValid() { + if err := d.array(v); err != nil { + return err + } + } else { + d.skip() + } + d.scanNext() + + case scanBeginObject: + if v.IsValid() { + if err := d.object(v); err != nil { + return err + } + } else { + d.skip() + } + d.scanNext() + + case scanBeginLiteral: + // All bytes inside literal return scanContinue op code. + start := d.readIndex() + d.rescanLiteral() + + if v.IsValid() { + if err := d.literalStore(d.data[start:d.readIndex()], v, false); err != nil { + return err + } + } + } + return nil +} + +type unquotedValue struct{} + +// valueQuoted is like value but decodes a +// quoted string literal or literal null into an interface value. +// If it finds anything other than a quoted string literal or null, +// valueQuoted returns unquotedValue{}. +func (d *decodeState) valueQuoted() any { + switch d.opcode { + default: + panic(phasePanicMsg) + + case scanBeginArray, scanBeginObject: + d.skip() + d.scanNext() + + case scanBeginLiteral: + v := d.literalInterface() + switch v.(type) { + case nil, string: + return v + } + } + return unquotedValue{} +} + +// indirect walks down v allocating pointers as needed, +// until it gets to a non-pointer. +// If it encounters an Unmarshaler, indirect stops and returns that. +// If decodingNull is true, indirect stops at the first settable pointer so it +// can be set to nil. +func indirect(v reflect.Value, decodingNull bool) (Unmarshaler, encoding.TextUnmarshaler, reflect.Value) { + // Issue #24153 indicates that it is generally not a guaranteed property + // that you may round-trip a reflect.Value by calling Value.Addr().Elem() + // and expect the value to still be settable for values derived from + // unexported embedded struct fields. 
+ // + // The logic below effectively does this when it first addresses the value + // (to satisfy possible pointer methods) and continues to dereference + // subsequent pointers as necessary. + // + // After the first round-trip, we set v back to the original value to + // preserve the original RW flags contained in reflect.Value. + v0 := v + haveAddr := false + + // If v is a named type and is addressable, + // start with its address, so that if the type has pointer methods, + // we find them. + if v.Kind() != reflect.Pointer && v.Type().Name() != "" && v.CanAddr() { + haveAddr = true + v = v.Addr() + } + for { + // Load value from interface, but only if the result will be + // usefully addressable. + if v.Kind() == reflect.Interface && !v.IsNil() { + e := v.Elem() + if e.Kind() == reflect.Pointer && !e.IsNil() && (!decodingNull || e.Elem().Kind() == reflect.Pointer) { + haveAddr = false + v = e + continue + } + } + + if v.Kind() != reflect.Pointer { + break + } + + if decodingNull && v.CanSet() { + break + } + + // Prevent infinite loop if v is an interface pointing to its own address: + // var v interface{} + // v = &v + if v.Elem().Kind() == reflect.Interface && v.Elem().Elem() == v { + v = v.Elem() + break + } + if v.IsNil() { + v.Set(reflect.New(v.Type().Elem())) + } + if v.Type().NumMethod() > 0 && v.CanInterface() { + if u, ok := v.Interface().(Unmarshaler); ok { + return u, nil, reflect.Value{} + } + if !decodingNull { + if u, ok := v.Interface().(encoding.TextUnmarshaler); ok { + return nil, u, reflect.Value{} + } + } + } + + if haveAddr { + v = v0 // restore original value after round-trip Value.Addr().Elem() + haveAddr = false + } else { + v = v.Elem() + } + } + return nil, nil, v +} + +// array consumes an array from d.data[d.off-1:], decoding into v. +// The first byte of the array ('[') has been read already. +func (d *decodeState) array(v reflect.Value) error { + // Check for unmarshaler. 
+ u, ut, pv := indirect(v, false) + if u != nil { + start := d.readIndex() + d.skip() + return u.UnmarshalJSON(d.data[start:d.off]) + } + if ut != nil { + d.saveError(&UnmarshalTypeError{Value: "array", Type: v.Type(), Offset: int64(d.off)}) + d.skip() + return nil + } + v = pv + + // Check type of target. + switch v.Kind() { + case reflect.Interface: + if v.NumMethod() == 0 { + // Decoding into nil interface? Switch to non-reflect code. + ai := d.arrayInterface() + v.Set(reflect.ValueOf(ai)) + return nil + } + // Otherwise it's invalid. + fallthrough + default: + d.saveError(&UnmarshalTypeError{Value: "array", Type: v.Type(), Offset: int64(d.off)}) + d.skip() + return nil + case reflect.Array, reflect.Slice: + break + } + + i := 0 + for { + // Look ahead for ] - can only happen on first iteration. + d.scanWhile(scanSkipSpace) + if d.opcode == scanEndArray { + break + } + + // Get element of array, growing if necessary. + if v.Kind() == reflect.Slice { + // Grow slice if necessary + if i >= v.Cap() { + newcap := v.Cap() + v.Cap()/2 + if newcap < 4 { + newcap = 4 + } + newv := reflect.MakeSlice(v.Type(), v.Len(), newcap) + reflect.Copy(newv, v) + v.Set(newv) + } + if i >= v.Len() { + v.SetLen(i + 1) + } + } + + if i < v.Len() { + // Decode into element. + if err := d.value(v.Index(i)); err != nil { + return err + } + } else { + // Ran out of fixed array: skip. + if err := d.value(reflect.Value{}); err != nil { + return err + } + } + i++ + + // Next token must be , or ]. + if d.opcode == scanSkipSpace { + d.scanWhile(scanSkipSpace) + } + if d.opcode == scanEndArray { + break + } + if d.opcode != scanArrayValue { + panic(phasePanicMsg) + } + } + + if i < v.Len() { + if v.Kind() == reflect.Array { + // Array. Zero the rest. 
+ z := reflect.Zero(v.Type().Elem()) + for ; i < v.Len(); i++ { + v.Index(i).Set(z) + } + } else { + v.SetLen(i) + } + } + if i == 0 && v.Kind() == reflect.Slice { + v.Set(reflect.MakeSlice(v.Type(), 0, 0)) + } + return nil +} + +var nullLiteral = []byte("null") +var textUnmarshalerType = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem() + +// object consumes an object from d.data[d.off-1:], decoding into v. +// The first byte ('{') of the object has been read already. +func (d *decodeState) object(v reflect.Value) error { + // Check for unmarshaler. + u, ut, pv := indirect(v, false) + if u != nil { + start := d.readIndex() + d.skip() + return u.UnmarshalJSON(d.data[start:d.off]) + } + if ut != nil { + d.saveError(&UnmarshalTypeError{Value: "object", Type: v.Type(), Offset: int64(d.off)}) + d.skip() + return nil + } + v = pv + t := v.Type() + + // Decoding into nil interface? Switch to non-reflect code. + if v.Kind() == reflect.Interface && v.NumMethod() == 0 { + oi := d.objectInterface() + v.Set(reflect.ValueOf(oi)) + return nil + } + + var fields structFields + + // Check type of target: + // struct or + // map[T1]T2 where T1 is string, an integer type, + // or an encoding.TextUnmarshaler + switch v.Kind() { + case reflect.Map: + // Map key must either have string kind, have an integer kind, + // or be an encoding.TextUnmarshaler. 
+ switch t.Key().Kind() { + case reflect.String, + reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, + reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + default: + if !reflect.PointerTo(t.Key()).Implements(textUnmarshalerType) { + d.saveError(&UnmarshalTypeError{Value: "object", Type: t, Offset: int64(d.off)}) + d.skip() + return nil + } + } + if v.IsNil() { + v.Set(reflect.MakeMap(t)) + } + case reflect.Struct: + fields = cachedTypeFields(t) + // ok + default: + d.saveError(&UnmarshalTypeError{Value: "object", Type: t, Offset: int64(d.off)}) + d.skip() + return nil + } + + var mapElem reflect.Value + var origErrorContext errorContext + if d.errorContext != nil { + origErrorContext = *d.errorContext + } + + var keys []string + + for { + // Read opening " of string key or closing }. + d.scanWhile(scanSkipSpace) + if d.opcode == scanEndObject { + // closing } - can only happen on first iteration. + break + } + if d.opcode != scanBeginLiteral { + panic(phasePanicMsg) + } + + // Read key. + start := d.readIndex() + d.rescanLiteral() + item := d.data[start:d.readIndex()] + key, ok := unquoteBytes(item) + if !ok { + panic(phasePanicMsg) + } + + keys = append(keys, string(key)) + + // Figure out field corresponding to key. + var subv reflect.Value + destring := false // whether the value is wrapped in a string to be decoded first + + if v.Kind() == reflect.Map { + elemType := t.Elem() + if !mapElem.IsValid() { + mapElem = reflect.New(elemType).Elem() + } else { + mapElem.Set(reflect.Zero(elemType)) + } + subv = mapElem + } else { + var f *field + if i, ok := fields.nameIndex[string(key)]; ok { + // Found an exact name match. + f = &fields.list[i] + } else { + // Fall back to the expensive case-insensitive + // linear search. 
+ for i := range fields.list { + ff := &fields.list[i] + if ff.equalFold(ff.nameBytes, key) { + f = ff + break + } + } + } + if f != nil { + subv = v + destring = f.quoted + for _, i := range f.index { + if subv.Kind() == reflect.Pointer { + if subv.IsNil() { + // If a struct embeds a pointer to an unexported type, + // it is not possible to set a newly allocated value + // since the field is unexported. + // + // See https://golang.org/issue/21357 + if !subv.CanSet() { + d.saveError(fmt.Errorf("json: cannot set embedded pointer to unexported struct: %v", subv.Type().Elem())) + // Invalidate subv to ensure d.value(subv) skips over + // the JSON value without assigning it to subv. + subv = reflect.Value{} + destring = false + break + } + subv.Set(reflect.New(subv.Type().Elem())) + } + subv = subv.Elem() + } + subv = subv.Field(i) + } + if d.errorContext == nil { + d.errorContext = new(errorContext) + } + d.errorContext.FieldStack = append(d.errorContext.FieldStack, f.name) + d.errorContext.Struct = t + } else if d.disallowUnknownFields { + d.saveError(fmt.Errorf("json: unknown field %q", key)) + } + } + + // Read : before value. + if d.opcode == scanSkipSpace { + d.scanWhile(scanSkipSpace) + } + if d.opcode != scanObjectKey { + panic(phasePanicMsg) + } + d.scanWhile(scanSkipSpace) + + if destring { + switch qv := d.valueQuoted().(type) { + case nil: + if err := d.literalStore(nullLiteral, subv, false); err != nil { + return err + } + case string: + if err := d.literalStore([]byte(qv), subv, true); err != nil { + return err + } + default: + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal unquoted value into %v", subv.Type())) + } + } else { + if err := d.value(subv); err != nil { + return err + } + } + + // Write value back to map; + // if using struct, subv points into struct already. 
+ if v.Kind() == reflect.Map { + kt := t.Key() + var kv reflect.Value + switch { + case reflect.PointerTo(kt).Implements(textUnmarshalerType): + kv = reflect.New(kt) + if err := d.literalStore(item, kv, true); err != nil { + return err + } + kv = kv.Elem() + case kt.Kind() == reflect.String: + kv = reflect.ValueOf(key).Convert(kt) + default: + switch kt.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + s := string(key) + n, err := strconv.ParseInt(s, 10, 64) + if err != nil || reflect.Zero(kt).OverflowInt(n) { + d.saveError(&UnmarshalTypeError{Value: "number " + s, Type: kt, Offset: int64(start + 1)}) + break + } + kv = reflect.ValueOf(n).Convert(kt) + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + s := string(key) + n, err := strconv.ParseUint(s, 10, 64) + if err != nil || reflect.Zero(kt).OverflowUint(n) { + d.saveError(&UnmarshalTypeError{Value: "number " + s, Type: kt, Offset: int64(start + 1)}) + break + } + kv = reflect.ValueOf(n).Convert(kt) + default: + panic("json: Unexpected key type") // should never occur + } + } + if kv.IsValid() { + v.SetMapIndex(kv, subv) + } + } + + // Next token must be , or }. + if d.opcode == scanSkipSpace { + d.scanWhile(scanSkipSpace) + } + if d.errorContext != nil { + // Reset errorContext to its original state. + // Keep the same underlying array for FieldStack, to reuse the + // space and avoid unnecessary allocs. + d.errorContext.FieldStack = d.errorContext.FieldStack[:len(origErrorContext.FieldStack)] + d.errorContext.Struct = origErrorContext.Struct + } + if d.opcode == scanEndObject { + break + } + if d.opcode != scanObjectValue { + panic(phasePanicMsg) + } + } + + if v.Kind() == reflect.Map { + d.lastKeys = keys + } + return nil +} + +// convertNumber converts the number literal s to a float64 or a Number +// depending on the setting of d.useNumber. 
+func (d *decodeState) convertNumber(s string) (any, error) { + if d.useNumber { + return Number(s), nil + } + f, err := strconv.ParseFloat(s, 64) + if err != nil { + return nil, &UnmarshalTypeError{Value: "number " + s, Type: reflect.TypeOf(0.0), Offset: int64(d.off)} + } + return f, nil +} + +var numberType = reflect.TypeOf(Number("")) + +// literalStore decodes a literal stored in item into v. +// +// fromQuoted indicates whether this literal came from unwrapping a +// string from the ",string" struct tag option. this is used only to +// produce more helpful error messages. +func (d *decodeState) literalStore(item []byte, v reflect.Value, fromQuoted bool) error { + // Check for unmarshaler. + if len(item) == 0 { + //Empty string given + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + return nil + } + isNull := item[0] == 'n' // null + u, ut, pv := indirect(v, isNull) + if u != nil { + return u.UnmarshalJSON(item) + } + if ut != nil { + if item[0] != '"' { + if fromQuoted { + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + return nil + } + val := "number" + switch item[0] { + case 'n': + val = "null" + case 't', 'f': + val = "bool" + } + d.saveError(&UnmarshalTypeError{Value: val, Type: v.Type(), Offset: int64(d.readIndex())}) + return nil + } + s, ok := unquoteBytes(item) + if !ok { + if fromQuoted { + return fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()) + } + panic(phasePanicMsg) + } + return ut.UnmarshalText(s) + } + + v = pv + + switch c := item[0]; c { + case 'n': // null + // The main parser checks that only true and false can reach here, + // but if this was a quoted string input, it could be anything. 
+ if fromQuoted && string(item) != "null" { + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + break + } + switch v.Kind() { + case reflect.Interface, reflect.Pointer, reflect.Map, reflect.Slice: + v.Set(reflect.Zero(v.Type())) + // otherwise, ignore null for primitives/string + } + case 't', 'f': // true, false + value := item[0] == 't' + // The main parser checks that only true and false can reach here, + // but if this was a quoted string input, it could be anything. + if fromQuoted && string(item) != "true" && string(item) != "false" { + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + break + } + switch v.Kind() { + default: + if fromQuoted { + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + } else { + d.saveError(&UnmarshalTypeError{Value: "bool", Type: v.Type(), Offset: int64(d.readIndex())}) + } + case reflect.Bool: + v.SetBool(value) + case reflect.Interface: + if v.NumMethod() == 0 { + v.Set(reflect.ValueOf(value)) + } else { + d.saveError(&UnmarshalTypeError{Value: "bool", Type: v.Type(), Offset: int64(d.readIndex())}) + } + } + + case '"': // string + s, ok := unquoteBytes(item) + if !ok { + if fromQuoted { + return fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()) + } + panic(phasePanicMsg) + } + switch v.Kind() { + default: + d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type(), Offset: int64(d.readIndex())}) + case reflect.Slice: + if v.Type().Elem().Kind() != reflect.Uint8 { + d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type(), Offset: int64(d.readIndex())}) + break + } + b := make([]byte, base64.StdEncoding.DecodedLen(len(s))) + n, err := base64.StdEncoding.Decode(b, s) + if err != nil { + d.saveError(err) + break + } + v.SetBytes(b[:n]) + case reflect.String: + if 
v.Type() == numberType && !isValidNumber(string(s)) { + return fmt.Errorf("json: invalid number literal, trying to unmarshal %q into Number", item) + } + v.SetString(string(s)) + case reflect.Interface: + if v.NumMethod() == 0 { + v.Set(reflect.ValueOf(string(s))) + } else { + d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type(), Offset: int64(d.readIndex())}) + } + } + + default: // number + if c != '-' && (c < '0' || c > '9') { + if fromQuoted { + return fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()) + } + panic(phasePanicMsg) + } + s := string(item) + switch v.Kind() { + default: + if v.Kind() == reflect.String && v.Type() == numberType { + // s must be a valid number, because it's + // already been tokenized. + v.SetString(s) + break + } + if fromQuoted { + return fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()) + } + d.saveError(&UnmarshalTypeError{Value: "number", Type: v.Type(), Offset: int64(d.readIndex())}) + case reflect.Interface: + n, err := d.convertNumber(s) + if err != nil { + d.saveError(err) + break + } + if v.NumMethod() != 0 { + d.saveError(&UnmarshalTypeError{Value: "number", Type: v.Type(), Offset: int64(d.readIndex())}) + break + } + v.Set(reflect.ValueOf(n)) + + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + n, err := strconv.ParseInt(s, 10, 64) + if err != nil || v.OverflowInt(n) { + d.saveError(&UnmarshalTypeError{Value: "number " + s, Type: v.Type(), Offset: int64(d.readIndex())}) + break + } + v.SetInt(n) + + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + n, err := strconv.ParseUint(s, 10, 64) + if err != nil || v.OverflowUint(n) { + d.saveError(&UnmarshalTypeError{Value: "number " + s, Type: v.Type(), Offset: int64(d.readIndex())}) + break + } + v.SetUint(n) + + case reflect.Float32, reflect.Float64: + n, err := strconv.ParseFloat(s, 
v.Type().Bits()) + if err != nil || v.OverflowFloat(n) { + d.saveError(&UnmarshalTypeError{Value: "number " + s, Type: v.Type(), Offset: int64(d.readIndex())}) + break + } + v.SetFloat(n) + } + } + return nil +} + +// The xxxInterface routines build up a value to be stored +// in an empty interface. They are not strictly necessary, +// but they avoid the weight of reflection in this common case. + +// valueInterface is like value but returns interface{} +func (d *decodeState) valueInterface() (val any) { + switch d.opcode { + default: + panic(phasePanicMsg) + case scanBeginArray: + val = d.arrayInterface() + d.scanNext() + case scanBeginObject: + val = d.objectInterface() + d.scanNext() + case scanBeginLiteral: + val = d.literalInterface() + } + return +} + +// arrayInterface is like array but returns []interface{}. +func (d *decodeState) arrayInterface() []any { + var v = make([]any, 0) + for { + // Look ahead for ] - can only happen on first iteration. + d.scanWhile(scanSkipSpace) + if d.opcode == scanEndArray { + break + } + + v = append(v, d.valueInterface()) + + // Next token must be , or ]. + if d.opcode == scanSkipSpace { + d.scanWhile(scanSkipSpace) + } + if d.opcode == scanEndArray { + break + } + if d.opcode != scanArrayValue { + panic(phasePanicMsg) + } + } + return v +} + +// objectInterface is like object but returns map[string]interface{}. +func (d *decodeState) objectInterface() map[string]any { + m := make(map[string]any) + for { + // Read opening " of string key or closing }. + d.scanWhile(scanSkipSpace) + if d.opcode == scanEndObject { + // closing } - can only happen on first iteration. + break + } + if d.opcode != scanBeginLiteral { + panic(phasePanicMsg) + } + + // Read string key. + start := d.readIndex() + d.rescanLiteral() + item := d.data[start:d.readIndex()] + key, ok := unquote(item) + if !ok { + panic(phasePanicMsg) + } + + // Read : before value. 
+ if d.opcode == scanSkipSpace { + d.scanWhile(scanSkipSpace) + } + if d.opcode != scanObjectKey { + panic(phasePanicMsg) + } + d.scanWhile(scanSkipSpace) + + // Read value. + m[key] = d.valueInterface() + + // Next token must be , or }. + if d.opcode == scanSkipSpace { + d.scanWhile(scanSkipSpace) + } + if d.opcode == scanEndObject { + break + } + if d.opcode != scanObjectValue { + panic(phasePanicMsg) + } + } + return m +} + +// literalInterface consumes and returns a literal from d.data[d.off-1:] and +// it reads the following byte ahead. The first byte of the literal has been +// read already (that's how the caller knows it's a literal). +func (d *decodeState) literalInterface() any { + // All bytes inside literal return scanContinue op code. + start := d.readIndex() + d.rescanLiteral() + + item := d.data[start:d.readIndex()] + + switch c := item[0]; c { + case 'n': // null + return nil + + case 't', 'f': // true, false + return c == 't' + + case '"': // string + s, ok := unquote(item) + if !ok { + panic(phasePanicMsg) + } + return s + + default: // number + if c != '-' && (c < '0' || c > '9') { + panic(phasePanicMsg) + } + n, err := d.convertNumber(string(item)) + if err != nil { + d.saveError(err) + } + return n + } +} + +// getu4 decodes \uXXXX from the beginning of s, returning the hex value, +// or it returns -1. +func getu4(s []byte) rune { + if len(s) < 6 || s[0] != '\\' || s[1] != 'u' { + return -1 + } + var r rune + for _, c := range s[2:6] { + switch { + case '0' <= c && c <= '9': + c = c - '0' + case 'a' <= c && c <= 'f': + c = c - 'a' + 10 + case 'A' <= c && c <= 'F': + c = c - 'A' + 10 + default: + return -1 + } + r = r*16 + rune(c) + } + return r +} + +// unquote converts a quoted JSON string literal s into an actual string t. +// The rules are different than for Go, so cannot use strconv.Unquote. 
+func unquote(s []byte) (t string, ok bool) { + s, ok = unquoteBytes(s) + t = string(s) + return +} + +func unquoteBytes(s []byte) (t []byte, ok bool) { + if len(s) < 2 || s[0] != '"' || s[len(s)-1] != '"' { + return + } + s = s[1 : len(s)-1] + + // Check for unusual characters. If there are none, + // then no unquoting is needed, so return a slice of the + // original bytes. + r := 0 + for r < len(s) { + c := s[r] + if c == '\\' || c == '"' || c < ' ' { + break + } + if c < utf8.RuneSelf { + r++ + continue + } + rr, size := utf8.DecodeRune(s[r:]) + if rr == utf8.RuneError && size == 1 { + break + } + r += size + } + if r == len(s) { + return s, true + } + + b := make([]byte, len(s)+2*utf8.UTFMax) + w := copy(b, s[0:r]) + for r < len(s) { + // Out of room? Can only happen if s is full of + // malformed UTF-8 and we're replacing each + // byte with RuneError. + if w >= len(b)-2*utf8.UTFMax { + nb := make([]byte, (len(b)+utf8.UTFMax)*2) + copy(nb, b[0:w]) + b = nb + } + switch c := s[r]; { + case c == '\\': + r++ + if r >= len(s) { + return + } + switch s[r] { + default: + return + case '"', '\\', '/', '\'': + b[w] = s[r] + r++ + w++ + case 'b': + b[w] = '\b' + r++ + w++ + case 'f': + b[w] = '\f' + r++ + w++ + case 'n': + b[w] = '\n' + r++ + w++ + case 'r': + b[w] = '\r' + r++ + w++ + case 't': + b[w] = '\t' + r++ + w++ + case 'u': + r-- + rr := getu4(s[r:]) + if rr < 0 { + return + } + r += 6 + if utf16.IsSurrogate(rr) { + rr1 := getu4(s[r:]) + if dec := utf16.DecodeRune(rr, rr1); dec != unicode.ReplacementChar { + // A valid pair; consume. + r += 6 + w += utf8.EncodeRune(b[w:], dec) + break + } + // Invalid surrogate; fall back to replacement rune. + rr = unicode.ReplacementChar + } + w += utf8.EncodeRune(b[w:], rr) + } + + // Quote, control characters are invalid. + case c == '"', c < ' ': + return + + // ASCII + case c < utf8.RuneSelf: + b[w] = c + r++ + w++ + + // Coerce to well-formed UTF-8. 
+ default: + rr, size := utf8.DecodeRune(s[r:]) + r += size + w += utf8.EncodeRune(b[w:], rr) + } + } + return b[0:w], true +} diff --git a/vendor/github.com/evanphx/json-patch/v5/internal/json/encode.go b/vendor/github.com/evanphx/json-patch/v5/internal/json/encode.go new file mode 100644 index 00000000..a1819b16 --- /dev/null +++ b/vendor/github.com/evanphx/json-patch/v5/internal/json/encode.go @@ -0,0 +1,1473 @@ +// Copyright 2010 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package json implements encoding and decoding of JSON as defined in +// RFC 7159. The mapping between JSON and Go values is described +// in the documentation for the Marshal and Unmarshal functions. +// +// See "JSON and Go" for an introduction to this package: +// https://golang.org/doc/articles/json_and_go.html +package json + +import ( + "bytes" + "encoding" + "encoding/base64" + "fmt" + "math" + "reflect" + "sort" + "strconv" + "strings" + "sync" + "unicode" + "unicode/utf8" +) + +// Marshal returns the JSON encoding of v. +// +// Marshal traverses the value v recursively. +// If an encountered value implements the Marshaler interface +// and is not a nil pointer, Marshal calls its MarshalJSON method +// to produce JSON. If no MarshalJSON method is present but the +// value implements encoding.TextMarshaler instead, Marshal calls +// its MarshalText method and encodes the result as a JSON string. +// The nil pointer exception is not strictly necessary +// but mimics a similar, necessary exception in the behavior of +// UnmarshalJSON. +// +// Otherwise, Marshal uses the following type-dependent default encodings: +// +// Boolean values encode as JSON booleans. +// +// Floating point, integer, and Number values encode as JSON numbers. +// +// String values encode as JSON strings coerced to valid UTF-8, +// replacing invalid bytes with the Unicode replacement rune. 
+// So that the JSON will be safe to embed inside HTML