
Add default TSCs if not present to ensure even distribution of OSDs #2913

Merged
1 commit merged into red-hat-storage:main on Dec 6, 2024

Conversation

malayparida2000
Contributor

When parts of the placement spec, such as tolerations or node affinity, are defined, the ocs-operator stops applying the default placement spec, including the topology spread constraints (TSCs). Without the TSCs, OSD distribution can become uneven. Always adding the default TSCs ensures consistent, balanced OSD placement across nodes.
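
A minimal Go sketch of the intended behavior, assuming rook's cephv1.Placement type and hypothetical helper names (the operator's real defaulting code differs in detail; for example, the prepare placement also selects rook-ceph-osd-prepare pods, and the topology key depends on the cluster's failure domain):

package main

import (
    "fmt"

    cephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultOSDTopologySpreadConstraints builds the two default TSCs seen in the
// CephCluster output below: a hard spread across racks and a soft spread
// across hostnames.
func defaultOSDTopologySpreadConstraints() []corev1.TopologySpreadConstraint {
    selector := &metav1.LabelSelector{
        MatchExpressions: []metav1.LabelSelectorRequirement{{
            Key:      "app",
            Operator: metav1.LabelSelectorOpIn,
            Values:   []string{"rook-ceph-osd"},
        }},
    }
    return []corev1.TopologySpreadConstraint{
        {
            MaxSkew:           1,
            TopologyKey:       "topology.rook.io/rack",
            WhenUnsatisfiable: corev1.DoNotSchedule,
            LabelSelector:     selector,
        },
        {
            MaxSkew:           1,
            TopologyKey:       "kubernetes.io/hostname",
            WhenUnsatisfiable: corev1.ScheduleAnyway,
            LabelSelector:     selector,
        },
    }
}

// ensureDefaultTSCs adds the default TSCs to a device set placement that does
// not define any of its own, so a user-supplied nodeAffinity or toleration no
// longer suppresses the spread constraints.
func ensureDefaultTSCs(p *cephv1.Placement) {
    if len(p.TopologySpreadConstraints) == 0 {
        p.TopologySpreadConstraints = defaultOSDTopologySpreadConstraints()
    }
}

func main() {
    // A placement that only sets nodeAffinity, like the with-placement
    // device set in the test further down.
    p := cephv1.Placement{NodeAffinity: &corev1.NodeAffinity{}}
    ensureDefaultTSCs(&p)
    fmt.Println(len(p.TopologySpreadConstraints)) // 2
}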

@malayparida2000
Contributor Author

/cc @travisn @iamniting

@openshift-ci openshift-ci bot requested review from iamniting and travisn December 2, 2024 09:12
@malayparida2000
Contributor Author

/hold for testing

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 2, 2024
When parts of the placement spec, such as tolerations or node affinity,
are defined, the ocs-operator stops applying the default placement spec,
including TSCs. Without the TSCs, OSD distribution can become uneven.
Always adding the default TSCs ensures consistent and balanced OSD
placement across nodes.

Signed-off-by: Malay Kumar Parida <[email protected]>
@malayparida2000
Contributor Author

Test StorageCluster: one device set with placement, another device set without placement.

cat <<EOF | oc create -f -
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: StorageSystem
    name: ocs-storagecluster-storagesystem
    uid: ocs-storagecluster-storagesystem
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: without-placement
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: "1Ti"
          storageClassName: gp3-csi
          volumeMode: Block
      count: 1
      replica: 3
      portable: true
    - name: with-placement
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: "1Ti"
          storageClassName: gp3-csi
          volumeMode: Block
      count: 1
      replica: 3
      portable: true
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                - us-east-1
EOF

Without the fix: device set placement on the CephCluster CR

~ $ oc get cephcluster ocs-storagecluster-cephcluster -o yaml | yq '.spec.storage.storageClassDeviceSets[] | {"name": .name, "placement": .placement, "preparePlacement": .preparePlacement}' -P | sed 's/^/  /' | sed 's/^  name/- name/' 
- name: without-placement-0
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
- name: without-placement-1
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
- name: without-placement-2
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
- name: with-placement-0
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
- name: with-placement-1
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
- name: with-placement-2
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1

With the fix: device set placement on the CephCluster CR

~ $ oc get cephcluster ocs-storagecluster-cephcluster -o yaml | yq '.spec.storage.storageClassDeviceSets[] | {"name": .name, "placement": .placement, "preparePlacement": .preparePlacement}' -P | sed 's/^/  /' | sed 's/^  name/- name/'
- name: without-placement-0
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
- name: without-placement-1
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
- name: without-placement-2
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
    tolerations:
      - effect: NoSchedule
        key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
- name: with-placement-0
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
- name: with-placement-1
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
                - rook-ceph-osd-prepare
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
- name: with-placement-2
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: topology.rook.io/rack
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - rook-ceph-osd
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
  preparePlacement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-east-1
    topologySpreadConstraints:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In

@malayparida2000
Contributor Author

As seen in testing, the TSCs are applied as expected. Removing the hold.
/unhold
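
For reference, the same check can be written as a small test sketch against the hypothetical ensureDefaultTSCs helper sketched in the description above (not the operator's actual test suite or API):

package main

import (
    "testing"

    cephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1"
    corev1 "k8s.io/api/core/v1"
)

// TestUserPlacementStillGetsDefaultTSCs mirrors the manual verification above:
// a device set that only sets nodeAffinity still ends up with the two default
// spread constraints, and the user-supplied nodeAffinity is preserved.
func TestUserPlacementStillGetsDefaultTSCs(t *testing.T) {
    p := cephv1.Placement{NodeAffinity: &corev1.NodeAffinity{}}
    ensureDefaultTSCs(&p)

    if got := len(p.TopologySpreadConstraints); got != 2 {
        t.Fatalf("expected 2 default TSCs, got %d", got)
    }
    if p.TopologySpreadConstraints[0].TopologyKey != "topology.rook.io/rack" {
        t.Errorf("unexpected first topologyKey %q", p.TopologySpreadConstraints[0].TopologyKey)
    }
    if p.NodeAffinity == nil {
        t.Error("user-supplied nodeAffinity should be preserved")
    }
}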

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 2, 2024
@malayparida2000
Contributor Author

/cherry-pick release-4.18

@openshift-cherrypick-robot

@malayparida2000: once the present PR merges, I will cherry-pick it on top of release-4.18 in a new PR and assign it to you.

In response to this:

/cherry-pick release-4.18

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Dec 6, 2024

openshift-ci bot commented Dec 6, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: iamniting, malayparida2000

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 6, 2024
@openshift-merge-bot openshift-merge-bot bot merged commit 1694063 into red-hat-storage:main Dec 6, 2024
11 checks passed
@openshift-cherrypick-robot

@malayparida2000: new pull request created: #2927

In response to this:

/cherry-pick release-4.18

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
