[chart/redis-ha][BUG] Can only do a rolling update of the ha-proxy deployment on a cluster with min 4 nodes #249
Comments
…date config, fix issue DandyDeveloper#249. Signed-off-by: Martijn van der Ploeg <[email protected]>
argocd also has this added to the deployment helm chart
Hello @martijnvdp I've been very neglectful of this repo as I've moved overseas back to my home country. I'll try to jump on your PR ASAP to get it in; I'll hopefully be settled over the coming week or two. Thanks for your patience.
Hello @DandyDeveloper, please take a look at the PR. A rolling upgrade should be expected to work with the default setting of 3 replicas.
…date config (#250): [stable/redis-ha] add maxUnavailable value for the ha-proxy rollingUpdate config, fix issue #249. Signed-off-by: Martijn van der Ploeg <[email protected]> Co-authored-by: Aaron Layfield <[email protected]>
Fixed in 4.25.1 per #250
Describe the bug
A rolling update of the ha-proxy deployment can only complete on a cluster with at least 4 nodes:
kubectl rollout restart deployment argocd-redis-ha-haproxy
The pod anti-affinity rules combined with the Deployment's default rolling update spec require a cluster of at least 4 nodes to work; a sketch of the relevant defaults is shown below.
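For context: with hardAntiAffinity: true the chart renders a required pod anti-affinity on kubernetes.io/hostname, so each haproxy replica must land on its own node. The Kubernetes default rolling update strategy is maxSurge: 25% / maxUnavailable: 25%; for 3 replicas that rounds up to 1 surge pod and down to 0 unavailable pods, so the rollout must schedule a 4th pod before removing any old one. A hedged, abridged sketch of the resulting spec (label names are illustrative, not copied from the chart):

```yaml
# Abridged sketch of the rendered haproxy Deployment; field values assumed
# from Kubernetes defaults, not copied verbatim from the chart templates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-redis-ha-haproxy
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # rounds up to 1 extra pod for 3 replicas
      maxUnavailable: 25%  # rounds down to 0 pods for 3 replicas
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:  # hardAntiAffinity: true
            - labelSelector:
                matchLabels:
                  app: redis-ha-haproxy  # illustrative label
              topologyKey: kubernetes.io/hostname  # one haproxy pod per node
```

On a 3-node cluster each node already holds one haproxy pod, so the surge pod has nowhere to schedule and the rollout stalls.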
Changing the deployment's rolling update settings fixes the issue, as in the sketch below.
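A minimal sketch of the strategy change, assuming the fix is to let one old pod terminate before its replacement is scheduled (matching the maxUnavailable knob that #250 later exposed in the chart):

```yaml
# Abridged Deployment strategy override; values are illustrative.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1  # free a node first so the new pod can satisfy the anti-affinity rule
```

With maxUnavailable: 1 the controller may scale down the old ReplicaSet by one pod, which frees a node for the new pod without needing a 4th node.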
To Reproduce
Start a rolling update of the haproxy deployment on a cluster with 3 nodes and anti-affinity rules set:
hardAntiAffinity: true
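A minimal values sketch for reproducing this, assuming a standard install of the chart (the haproxy.enabled and haproxy.replicas keys are assumed here and shown only for clarity; 3 is the usual replica default):

```yaml
# values.yaml (abridged); hardAntiAffinity forces one haproxy pod per node
hardAntiAffinity: true
haproxy:
  enabled: true
  replicas: 3
```

Then restart the deployment as shown above; the new pod should stay Pending with a FailedScheduling event because no node satisfies the anti-affinity rule.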
Expected behavior
I expected to be able to do a rolling update on a cluster with fewer than 4 nodes.
Additional context
See also https://stackoverflow.com/questions/65063122/kubernetes-podantiaffinity-affects-deployment-failedscheduling-didnt-match