
Available CRDs check feature #253

Merged
2 commits merged into red-hat-storage:main on Oct 8, 2024

Conversation

@rewantsoni (Member) commented Oct 7, 2024

Changes

  • Update the storageClaim controller to reconcile whenever a create/delete event is enqueued for a CRD of interest that has newly become available.
  • Exit the ocs-client-operator process with a specific exit code whenever a CRD of interest is created or deleted.
  • Catch that exit code with a bash script and restart the ocs-client-operator process.
  • Conditionally watch the DRClusterConfig CRD.
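Taken together, the second and third bullets amount to a supervised restart loop around the operator binary. A minimal sketch of what such a wrapper could look like (illustrative only; the exit code, function name, and echo message are assumptions, not the actual script added by this PR):

```shell
#!/bin/bash
# Hypothetical wrapper: run the operator binary and restart it in place
# whenever it exits with the dedicated "CRD availability changed" code,
# so the pod itself never restarts. 42 is an assumed value.
RESTART_EXIT_CODE=42

run_operator() {
  local code
  while true; do
    "$@"                      # run the operator binary (e.g. ./manager)
    code=$?
    if [ "$code" -ne "$RESTART_EXIT_CODE" ]; then
      # Propagate every other exit code so Kubernetes sees real failures.
      return "$code"
    fi
    echo "CRD availability changed; restarting operator without a pod restart"
  done
}

run_operator "$@"
```

Because only the dedicated code triggers a restart, crashes and clean shutdowns still surface to Kubernetes with their original exit status.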

In the future we will also need to watch the Maintenance Mode CRD provided by Ramen.

This PR only adds the base for watching the DRClusterConfig CRD provided by Ramen; the actual creation will be added in #177.

Ref: red-hat-storage/ocs-operator#2745

@raaizik (Contributor) commented Oct 7, 2024

/lgtm

@leelavg (Contributor) left a comment

Minor comment; the rest is the same as what was implemented in ocs-operator, isn't it? By the way, are you planning to send another PR to use this mechanism for the StorageCluster CR?

Asking because I'm going to remove that specific code in a couple of days (when we get the RC).

internal/controller/storageclaim_controller.go (review thread: outdated, resolved)
Reasons for this enhancement:
- A controller cannot set up a watch for a CRD that is not installed on
  the cluster; attempting to set up the watch panics the operator.
- There is no way, that we are aware of, to add a watch later
  without client cache issues.

How the enhancement works around the issue:
- On start of the operator (main), detect which CRDs are available.
- At the start of each controller reconcile, fetch the CRDs of
  interest and compare them with the CRDs detected in the previous
  step; if there is any change, panic the operator.

Signed-off-by: Rewant Soni <[email protected]>
Signed-off-by: raaizik <[email protected]>
Adds a script that bypasses pod restarts

Signed-off-by: Rewant Soni <[email protected]>
Signed-off-by: raaizik <[email protected]>
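The detect-at-start / compare-at-reconcile logic described in the first commit message can be sketched as below. This is a self-contained stdlib-only illustration; in the real operator the CRD names would come from listing CustomResourceDefinitions through the apiserver, and the CRD names, exit code, and `changed` helper are assumptions, not the actual ocs-client-operator code:

```go
package main

import "fmt"

// restartExitCode is the dedicated code the wrapper script watches for
// (assumed value; the real operator defines its own).
const restartExitCode = 42

// changed reports whether the set of available CRDs differs from the
// snapshot taken when the operator started.
func changed(atStart, now map[string]bool) bool {
	if len(atStart) != len(now) {
		return true
	}
	for name := range atStart {
		if !now[name] {
			return true
		}
	}
	return false
}

func main() {
	// Snapshot taken in main() at operator start.
	atStart := map[string]bool{
		"storageclaims.ocs.openshift.io": true,
	}
	// What a later reconcile sees: a CRD of interest was installed.
	now := map[string]bool{
		"storageclaims.ocs.openshift.io":        true,
		"drclusterconfigs.ramendr.openshift.io": true,
	}
	if changed(atStart, now) {
		// The real operator would exit here (e.g. os.Exit(restartExitCode))
		// so the wrapper script restarts the process with fresh watches.
		fmt.Println("restart needed; would exit with code", restartExitCode)
	}
}
```

Restarting the whole process sidesteps the client-cache problem: the manager is rebuilt from scratch with watches for exactly the CRDs that exist at that moment.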
@rewantsoni (Member, Author)

Minor comment; the rest is the same as what was implemented in ocs-operator, isn't it? By the way, are you planning to send another PR to use this mechanism for the StorageCluster CR?
Asking because I'm going to remove that specific code in a couple of days (when we get the RC).

No, I don't plan to send a PR to use this for the StorageCluster CR.

openshift-ci bot commented Oct 8, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: leelavg, raaizik, rewantsoni

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved label Oct 8, 2024
@openshift-merge-bot openshift-merge-bot bot merged commit 7e5e1c9 into red-hat-storage:main Oct 8, 2024
11 of 12 checks passed
rchikatw pushed a commit to rchikatw/ocs-client-operator that referenced this pull request Oct 16, 2024
Available CRDs check feature

Signed-off-by: rchikatw <[email protected]>
3 participants