Update FAQ.md
Looks like there's also this scenario that can prevent the Cluster Autoscaler from scaling down. It's been observed especially with calico-typha.

1. Calico-typha uses a hostPort configuration by design to listen on a port on the host (see the sketch after this list). This means at most one calico-typha pod can run on any given node.
2. Calico-typha is a Deployment whose replica count is set by an "auto-scale" configuration from the Tigera operator according to the node count.
3. When scaling down, cluster-autoscaler tries to find another node that can schedule the evicted pod, but it is blocked by (1), since every other node already has a typha pod scheduled.
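For illustration, here is a minimal, hypothetical Deployment sketch (not the real calico-typha manifest, which is managed by the Tigera operator) showing how a hostPort binding limits scheduling to one such pod per node:

```yaml
# Hypothetical illustration: a Deployment whose pods bind a hostPort.
# A host port can only be bound once per node, so the scheduler can
# place at most one of these pods on any given node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: typha-like-example        # hypothetical name, not the real calico-typha manifest
spec:
  replicas: 3                     # e.g. kept in proportion to node count by an operator
  selector:
    matchLabels:
      app: typha-like-example
  template:
    metadata:
      labels:
        app: typha-like-example
    spec:
      containers:
      - name: server
        image: example/server:latest   # placeholder image
        ports:
        - containerPort: 5473
          hostPort: 5473               # the hostPort that limits one pod per node
```

With every node already running one of these pods, a pod evicted during scale-down has nowhere left to schedule, so the node cannot be removed.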
axelgMS authored Oct 19, 2023
1 parent a3a29cf commit 56b4871
Showing 1 changed file with 1 addition and 0 deletions.
1 change: 1 addition & 0 deletions cluster-autoscaler/FAQ.md
@@ -89,6 +89,7 @@ Cluster Autoscaler decreases the size of the cluster when some nodes are consist
* are not run on the node by default, *
* don't have a [pod disruption budget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#how-disruption-budgets-work) set or their PDB is too restrictive (since CA 0.6).
* Pods that are not backed by a controller object (so not created by deployment, replica set, job, stateful set etc). *
* Pods with a hostPort configuration, which lets the scheduler fit only one such pod per node. Scale-down might not work because the remaining nodes will already have such a pod on them and can't schedule the evicted one.
* Pods with local storage **. *
- unless the pod has the following annotation set:
```
