Update FAQ.md #6208
Conversation
Looks like there's also this scenario that can prevent CAS from scaling down. It's been especially observed with Calico-typha.
1. Calico-typha uses a hostPort configuration to listen on a port on the host by design. This means at most one calico-typha pod can exist on the same node (see the sketch below).
2. Calico-typha is a Deployment, and the Tigera operator auto-scales its replica count according to the node count.
3. When scaling down, Cluster Autoscaler tries to find another node that can schedule the pod, but it is blocked because of (1), since every other node already has a typha pod scheduled.
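For illustration, here's a minimal sketch of what such a hostPort container spec looks like, written with the Kubernetes Go API types (the name, image tag, and port value are assumptions for the example, not taken from the actual calico-typha manifest). Because no two pods can bind the same port on a node, the scheduler can place at most one such pod per node, which is what blocks CAS from rescheduling it during scale-down.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod resembling calico-typha: the hostPort binding
	// reserves a port on the node itself, so the scheduler can fit
	// at most one such pod per node.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "calico-typha-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "calico-typha",
				Image: "calico/typha:example", // illustrative tag, not the real manifest's
				Ports: []corev1.ContainerPort{{
					ContainerPort: 5473,
					HostPort:      5473, // exposes the container port on the node's own IP
				}},
			}},
		},
	}
	fmt.Printf("hostPort=%d containerPort=%d\n",
		pod.Spec.Containers[0].Ports[0].HostPort,
		pod.Spec.Containers[0].Ports[0].ContainerPort)
}
```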
Welcome @axelgMS!
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: axelgMS
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Since I'm not sure about the EasyCLA, I'm opening an issue so that someone can update the doc, if relevant.
@axelgMS To check EasyCLA: /easycla
/assign @towca |
CLA not signed, we are unable to take the contribution. Please reopen once signed.
This is already captured (perhaps not clearly enough) in this point of the FAQ:
If a scheduled pod can't be scheduled on any other node in the cluster, Cluster Autoscaler won't evict it - otherwise it would just stay pending, or trigger a scale-up.

The described scenario with hostPorts is just one example of this. We should probably make the wording of that point clearer. We can also list the hostPort scenario as an example, but it shouldn't be a separate point.

@lualvare I'll try to update the FAQ some time next week, or you can always send out a PR of course.
Hi @towca, thank you so much for the reply. Making the wording clearer and adding the hostPort example would be good enough, and it will definitely help when cases like this come up. Have a good day. Thanks.
What type of PR is this?
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: