
Update FAQ.md #6208

Closed
wants to merge 1 commit into from

Conversation

@axelgMS commented Oct 19, 2023

Looks like there's also this scenario that can prevent CAS from scaling down. It's been observed especially with calico-typha.

  1. calico-typha uses a hostPort configuration to listen on a port on the host by design. This means at most one calico-typha pod can exist on a given node.
  2. calico-typha is a Deployment, which has an "auto-scale" configuration from the tigera operator that sets its replica count according to the node count.
  3. When scaling down, cluster-autoscaler will try to find another node that can schedule the pod; however, it will be blocked due to (1), as every other node also has typha scheduled.
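For context, the hostPort setup described in step 1 looks roughly like this. This is a simplified sketch, not the actual manifest: the real calico-typha Deployment is generated by the tigera operator, and the image tag here is illustrative (5473 is typha's default port):

```yaml
# Simplified sketch of the relevant part of a calico-typha Deployment.
# The hostPort binding means the scheduler can place at most one
# typha pod per node, since a host port can only be bound once.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-typha
  namespace: calico-system
spec:
  replicas: 3            # adjusted by the operator based on node count
  template:
    spec:
      containers:
      - name: calico-typha
        image: calico/typha:v3.26.0   # version is illustrative
        ports:
        - containerPort: 5473
          hostPort: 5473              # binds port 5473 on the node itself
          protocol: TCP
```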

@linux-foundation-easycla

CLA Not Signed

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Oct 19, 2023
@k8s-ci-robot (Contributor)

Welcome @axelgMS!

It looks like this is your first PR to kubernetes/autoscaler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/autoscaler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Oct 19, 2023
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: axelgMS
Once this PR has been reviewed and has the lgtm label, please assign towca for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@axelgMS (Author) commented Oct 19, 2023

Since I'm not sure about the EasyCLA process, I'm opening an issue so that someone can update the doc, if relevant:

#6209

@Shubham82 (Contributor)

@axelgMS
you have to sign the CLA before the PR can be reviewed.
See the following document to sign the CLA: Signing Contributor License Agreements (CLA)

@Shubham82 (Contributor)

To check EasyCLA

/easycla

@towca (Collaborator) commented Nov 10, 2023

/assign @towca

@mwielgus (Contributor)

CLA not signed; we are unable to accept the contribution. Please reopen once signed.

@mwielgus mwielgus closed this Nov 14, 2023
@lualvare commented Feb 8, 2024

Hi @towca,

I have experienced the same problem as reported in issue #6209; could you help get the CAS FAQ updated with this scenario, which prevents CAS from scaling down?

Thank you.

@towca (Collaborator) commented Feb 9, 2024

This is already captured (perhaps not clearly enough) in this point of the FAQ:

* Pods that cannot be moved elsewhere due to various constraints (lack of resources, non-matching node selectors or affinity,
matching anti-affinity, etc)

If a scheduled pod can't be scheduled on any other node in the cluster, Cluster Autoscaler won't evict it; otherwise the pod would just stay pending, or trigger a scale-up. The described scenario with hostPorts is just one example of this. We should probably make the wording of that point clearer. We can also list the hostPort scenario as an example, but it shouldn't be a separate point.
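The logic above can be sketched as follows. This is a conceptual illustration, not Cluster Autoscaler's actual code; the `host_ports`, `fits`, and `can_drain` helpers are hypothetical names for the simulation CA performs before removing a node:

```python
# Conceptual sketch (not Cluster Autoscaler's actual implementation) of why
# a hostPort pod blocks scale-down: before removing a node, CA checks
# whether each of its pods could be rescheduled onto some remaining node.
# A pod requesting a hostPort cannot land on a node where that port is
# already taken, so if every other node runs a typha replica, the check
# fails and the node is kept.

def host_ports(pod):
    """Host ports requested by a pod (hypothetical helper)."""
    return pod.get("hostPorts", [])

def fits(pod, node):
    """A pod fits a node only if none of its host ports collide."""
    taken = {p for existing in node["pods"] for p in host_ports(existing)}
    return not any(p in taken for p in host_ports(pod))

def can_drain(node, other_nodes):
    """A node is removable only if every pod on it can move elsewhere."""
    return all(any(fits(pod, n) for n in other_nodes) for pod in node["pods"])

# Three nodes, each already running a typha pod bound to hostPort 5473.
typha = {"hostPorts": [5473]}
nodes = [{"pods": [dict(typha)]} for _ in range(3)]

# The typha pod on node 0 cannot fit on nodes 1 or 2, so scale-down is blocked.
print(can_drain(nodes[0], nodes[1:]))  # False
```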

@lualvare I'll try to update the FAQ some time next week, or you can always send out a PR of course.

@lualvare commented Feb 9, 2024

Hi @towca,

Thank you so much for the reply. Making the wording clearer and adding that example for hostPort scenarios would be good enough; it will definitely help when cases like this come up.

Have a good day.

Thanks.

@lualvare

Hello @towca,

I hope you are doing well. Just wanted to follow up on this PR, to see whether this scenario could still be added to the CAS FAQ?

@towca (Collaborator) commented Feb 26, 2024

@lualvare Thanks for the reminder, sent out #6567!

Labels
area/cluster-autoscaler cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files.
6 participants