CA FAQ: clarify the point about scheduling constraints blocking scale-down #6567
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: towca

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
@@ -96,8 +96,13 @@ Cluster Autoscaler decreases the size of the cluster when some nodes are consist
     "cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes": "volume-1,volume-2,.."
   ```
   and all of the pod's local volumes are listed in the annotation value.
-* Pods that cannot be moved elsewhere due to various constraints (lack of resources, non-matching node selectors or affinity,
-  matching anti-affinity, etc)
+* Pods that cannot be moved elsewhere due to scheduling constraints. CA simulates kube-scheduler behavior, and if there's no other node where a given pod can schedule, the pod's node won't be scaled down.
Suggested change:
-* Pods that cannot be moved elsewhere due to scheduling constraints. CA simulates kube-scheduler behavior, and if there's no other node where a given pod can schedule, the pod's node won't be scaled down.
+* Pods that cannot be moved elsewhere due to scheduling constraints. CA simulates
+  kube-scheduler behavior, and if there's no other node where a given pod can schedule, the
+  pod's node won't be scaled down.
Just a small nit: Wrap the line
-* Pods that cannot be moved elsewhere due to various constraints (lack of resources, non-matching node selectors or affinity,
-  matching anti-affinity, etc)
+* Pods that cannot be moved elsewhere due to scheduling constraints. CA simulates kube-scheduler behavior, and if there's no other node where a given pod can schedule, the pod's node won't be scaled down.
+  * This can be particularly visible if a given workload's pods are configured to only fit one pod per node on some subset of nodes. Such pods will always block CA from scaling down their nodes, because all
Suggested change:
-* This can be particularly visible if a given workload's pods are configured to only fit one pod per node on some subset of nodes. Such pods will always block CA from scaling down their nodes, because all
+* This can be particularly visible if a given workload's pods are configured to only fit
+  one pod per node on some subset of nodes. Such pods will always block CA from scaling
+  down their nodes, because all
nit: Wrap the line
/lgtm
Prompted by #6208
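
To make the one-pod-per-node case from the new FAQ text concrete, here is a minimal sketch (not part of this PR; the name, labels, and image are hypothetical) of a workload whose required pod anti-affinity guarantees at most one replica per node. In CA's kube-scheduler simulation, no other node can accept a displaced replica, so each replica blocks scale-down of the node it runs on.

```yaml
# Hypothetical illustration only: required anti-affinity on the hostname
# topology key means no two replicas can ever share a node. CA's scheduling
# simulation therefore finds no other node for a displaced replica, and the
# node running it won't be scaled down.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-per-node-app   # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: one-per-node-app
  template:
    metadata:
      labels:
        app: one-per-node-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: one-per-node-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9   # placeholder image
```

The same effect can come from resource requests sized so that only one such pod fits per node; anti-affinity is just the most explicit way to express the constraint.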