
Parameterize ObjectSelector for VPA webhook #6558

Closed
jackjii79 opened this issue Feb 23, 2024 · 8 comments
Labels
area/vertical-pod-autoscaler kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@jackjii79

Which component are you using?:

vertical-pod-autoscaler - admission-controller

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

The MutatingAdmissionWebhook configuration supports an ObjectSelector to select which resources should be sent to the webhook endpoint. This field allows users finer-grained control over which resources are selected.

What is missing is the ability to specify these selectors when the admission controller creates the MutatingWebhookConfiguration. There does not appear to be a way to specify which pods the VPA webhook should select.

In practice, the scope of resources intercepted by a mutating webhook should be limited to the minimal necessary set; currently, all pod creation requests are intercepted.

Ideally, a cluster admin could configure the admission controller so that the webhook only selects certain resources. Right now, every pod created in the cluster is intercepted, and the admission controller then checks whether a matching VPA exists.

Describe the solution you'd like.:

Expose a new parameter, e.g. --object-selector='{ "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] }'
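
A flag like this would presumably be translated into the standard objectSelector field of the MutatingWebhookConfiguration that the admission controller registers. A sketch of what the resulting object could look like, using the admissionregistration.k8s.io/v1 API (the metadata/webhook names and service reference below are illustrative, not necessarily the VPA's actual values):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: vpa-webhook-config        # illustrative name
webhooks:
  - name: vpa.k8s.io              # illustrative webhook name
    # objectSelector is a standard LabelSelector evaluated against the
    # labels of the object being admitted (here: the pod)
    objectSelector:
      matchExpressions:
        - key: runlevel
          operator: NotIn
          values: ["0", "1"]
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: vpa-webhook         # illustrative service reference
        namespace: kube-system
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

With such a selector in place, the API server would skip the webhook entirely for pods whose labels match runlevel=0 or runlevel=1, rather than sending every pod creation to the admission controller.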

Describe any alternative solutions you've considered.:

A --namespace-selector flag could serve a similar purpose at the namespace level.
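
For the namespace-level alternative, the webhook API already defines a namespaceSelector field, matched against the labels of the namespace the admitted object belongs to. A sketch of the corresponding webhook fragment (label key and values are illustrative):

```yaml
# Fragment of a MutatingWebhookConfiguration webhook entry.
# namespaceSelector is evaluated against the labels of the namespace
# containing the object being admitted, not the object itself.
namespaceSelector:
  matchExpressions:
    - key: environment            # illustrative label key
      operator: In
      values: ["staging", "dev"]  # illustrative label values
```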

Additional context.:
There is a similar request: #6232

@jackjii79 jackjii79 added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 23, 2024
@voelzmo
Contributor

voelzmo commented Feb 26, 2024

Hey @jackjii79, thanks for the feature request! As you correctly identified, there's a similar request open already, which has a bit more detail on the desired use case.
Could you help me understand what you're trying to achieve with this feature? Could you provide an example of how you'd configure the objectSelector on the webhook? What concerns do you have with the current approach of not specifying custom selectors? Is this efficiency/performance related, or does it come from a different point of view?

Thanks!

@jackjii79
Author


Thanks @voelzmo for the follow-up. I want this feature because the admission controller supports self-registering its MutatingWebhookConfiguration, so it should also allow intercepting resource requests at a finer granularity. Our internal cloud has very strict regulations, and we want to minimize the potential cluster-level impact of VPA.

A concrete use case: we have a centralized provisioning service responsible for creating/updating/deleting on-demand workload resources (StatefulSets, Deployments, etc.) for certain tasks. All created resources are confined to a single namespace and carry a common set of labels, and we want to use VPA to optimize operational cost. I want to customize the webhook in the admission controller so that it only intercepts that set of resources.
For instance,

objectSelector:
  matchExpressions:
  - key: app.kubernetes.io/component
    operator: In
    values:
    - componentName
    - ...
  - key: app.kubernetes.io/part-of
    operator: In
    values:
    - componentName
    - ...

VPA is only going to be used for ad-hoc resources managed by the provisioning service, so we want to be able to control which resource requests are intercepted by the webhook service.
As you mentioned, efficiency/performance is also a valid reason: we do not want this to impact other resources in any way, including performance/response time.
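
Under a selector like the one above, only pods carrying the matching labels would reach the webhook. A sketch of matching pod metadata (names, namespace, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: provisioned-worker            # illustrative name
  namespace: provisioning             # illustrative namespace
  labels:
    # both matchExpressions use operator In, so each key must carry
    # one of the listed values for the pod to be selected
    app.kubernetes.io/component: componentName
    app.kubernetes.io/part-of: componentName
spec:
  containers:
    - name: worker
      image: example.com/worker:latest   # illustrative image
```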

The current workaround is to disable self-registration, but since the admission controller does support creating the mutating webhook itself, this feature remains relevant.

FYI, there is another request I created which outlines the higher-level motivation: #6568

@voelzmo
Contributor

voelzmo commented Mar 19, 2024

The discussion in #6232 is about better controlling the list of namespaces that the VPA should not operate in.
The main point you're raising in #6568 seems focused on requiring only the necessary roles, when the list of namespaces is smaller than "all namespaces", correct?

Given the context of previous discussions we had about exposing a labelSelector on the webhook in #5660, I'm inclined to say that we don't want to open the objectSelector for the webhook and would rather try to improve the namespace allowlist/denylist handling for VPA. Does this make sense? Would this solve your current issues with VPA?
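
An improved namespace allowlist/denylist could plausibly build on the webhook's existing namespaceSelector field together with the well-known kubernetes.io/metadata.name label that Kubernetes automatically sets on every namespace. The fragment below is an illustrative sketch of that idea, not the VPA's current behavior:

```yaml
# Illustrative webhook fragment: restrict the webhook to an explicit
# allowlist of namespaces by matching the automatic
# kubernetes.io/metadata.name label (set on every namespace by the
# NamespaceDefaultLabelName feature).
namespaceSelector:
  matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: In
      values:
        - provisioning     # illustrative namespace names
        - workloads
```

A denylist would look the same with operator NotIn, which is also how webhooks commonly exclude kube-system to avoid intercepting control-plane pods.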

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 17, 2024
@adrianmoisey
Member

/area vertical-pod-autoscaler

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 7, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Sep 6, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:


/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
