[BUG] Recreating existing pods when replicas changed #543

Open
davordbetter opened this issue May 17, 2024 · 2 comments
Labels: bug (Something isn't working)

Comments

@davordbetter

Describe the bug
At the moment I have 2 replicas

opensearch-cluster-master-0               1/1     Running   0          14m
opensearch-cluster-master-1               1/1     Running   0          16m

If I change replicas: 2 -> 3 in values.yaml, master-2 comes up, but master-1 is then terminated, and once master-1 is ready again, master-0 gets terminated:

opensearch-cluster-master-0               1/1     Running       0          15m
opensearch-cluster-master-1               1/1     Terminating   0          18m
opensearch-cluster-master-2               1/1     Running       0          36s
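
(As a side note, one way to see what triggers the restarts is to check the StatefulSet's update strategy and compare the rendered pod template before and after the change; the resource, release, and chart names below are assumed, not taken from this report.)

# Show how the StatefulSet rolls out spec changes (assumed resource name)
kubectl get statefulset opensearch-cluster-master -o jsonpath='{.spec.updateStrategy}'
# Render the chart with the old and new values and diff the StatefulSet (placeholder names)
helm template my-release ./my-parent-chart -f values.yaml > rendered.yaml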

To Reproduce
Installed the chart as a subchart and added this to my values.yaml:

opensearch:
  enabled: true
  clusterName: "opensearch-cluster"
  image:
    tag: 2.14.0
  replicas: 3
  persistence:
    storageClass: "azuredisk-csi-premium-retain"
    size: 20Gi
  envFrom:
    - secretRef:
        name: app-opensearch
    - configMapRef:
        name: app-opensearch
  config:
    opensearch.yml: |
      cluster.name: opensearch-cluster
      network.host: 0.0.0.0
  sysctlInit:
    enabled: true
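
(For context, the replica change above would typically be rolled out with a plain Helm upgrade, e.g. something along these lines; the release and parent-chart names are placeholders, not taken from this report.)

# Apply the updated values to the parent chart release (placeholder names)
helm upgrade my-release ./my-parent-chart -f values.yaml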

Expected behavior
Existing master pods should not be terminated when only the replica count is increased.

Host/Environment (please complete the following information):

  • Helm Version: v3.14.2
  • Kubernetes Version: 1.28.5

Additional logs:
opensearch-master-0.log

davordbetter added the bug (Something isn't working) and untriaged (Issues that have not yet been triaged) labels on May 17, 2024
@prudhvigodithi
Member

[Triage]
Hey @davordbetter, I assume the pods are restarted to reach quorum. After some time, is the cluster stable again after increasing the replicas?

@davordbetter
Author

Correct.

I tried kubectl scale and it worked as expected (scaling up only adds pod-2; scaling down only terminates pod-2).
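
(The exact command wasn't given; presumably something like the following, with the StatefulSet name inferred from the pod names above.)

# Scale the StatefulSet directly; only pod-2 is added or removed (assumed resource name)
kubectl scale statefulset opensearch-cluster-master --replicas=3
kubectl scale statefulset opensearch-cluster-master --replicas=2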

Whereas if I scale via the Helm chart:
pod-0 and pod-1 are alive and healthy.
Scale up to 3.
pod-2 initializes, and once it becomes healthy, pod-1 is terminated and recreated. Once pod-1 is healthy, the same happens to pod-0.

prudhvigodithi removed the untriaged (Issues that have not yet been triaged) label on Jun 6, 2024
getsaurabh02 moved this from 🆕 New to Later (6 months plus) in the Engineering Effectiveness Board on Jul 18, 2024