[BUG] New pods are deleted shortly after creation, and the old pods remain. ArgoCD #627
Comments
@qdrddr The way Reloader works is by updating an env var, which triggers a deployment change, and the update is propagated to the pods. Could there be any case where ArgoCD is actively reverting the changes Reloader made to the Deployment? In that case the new pods would be deleted and the old state would persist.
@MuneebAijaz Would you recommend a way to work around this and continue using ArgoCD?
@qdrddr I think this needs more investigation into whether the cause really is ArgoCD, and if it is, whether ArgoCD should watch the Env field under Deployments. If not, ArgoCD provides ways to ignore specific fields in specific resources.
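For reference, ArgoCD's ignoreDifferences mechanism can mask specific fields from diffing. A sketch of an Application that ignores the container env lists Reloader mutates (the app name is illustrative; the RespectIgnoreDifferences sync option is needed so the ignored fields are also left alone during sync, not just in the diff view):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: oathkeeper          # illustrative name
spec:
  # source / destination / project omitted for brevity
  ignoreDifferences:
    - group: apps
      kind: Deployment
      # ignore the env lists that Reloader updates
      jqPathExpressions:
        - .spec.template.spec.containers[].env
  syncPolicy:
    syncOptions:
      - RespectIgnoreDifferences=true
```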
Could you point to the documentation for this? I also have doubts about the proposed workaround of telling ArgoCD to skip checking parts of a resource: even if ArgoCD ignores the Envs, with auto-prune enabled it will still see and kill the extra containers regardless of changes in Env. So basically, I cannot use Reloader with ArgoCD when auto-pruning is enabled. @MuneebAijaz
The problem is that Reloader creates additional containers before killing outdated ones to reduce impact, which is an excellent strategy. But ArgoCD, with pruning enabled, notices an extra container and kills it before Reloader gets a chance to kill the outdated container. As a result, the outdated containers remain unchanged. Do you know if ArgoCD integration is needed here? Ideas on how this can be fixed:
Yes, there are implications to that approach, but not the ones stated above.
Reloader doesn't do that. Reloader only updates the ENV field in the parent resource (Deployment, StatefulSet, DaemonSet). When an ENV is updated, the parent resource is bound to propagate that change to the pods: it spins up another ReplicaSet with the new ENV, and the ReplicaSet creates new pods with the updated ENV. That is how Reloader performs the update. Reloader itself doesn't manage the pod/container lifecycle; to avoid affecting the user's application, it relies on the deployment strategy already set on the Deployment to propagate the change.
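A sketch of what the default env-vars strategy looks like on the Deployment's pod template — the variable name and value below are illustrative of the pattern, not copied from a real cluster:

```yaml
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            # Injected/updated by Reloader. Changing this value changes the
            # pod template hash, so the Deployment rolls out a new ReplicaSet
            # using its own configured rollout strategy.
            - name: STAKATER_ACCESSRULES_CONFIGMAP
              value: "6a204bd89f3c8348afd5c77c717a097a"  # hash of the ConfigMap data
```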
I will try to replicate the issue on my end and get back to you.
@0123hoang Nope, the problem persists with
we are also facing the same issue |
I would debug this by disabling self-heal for the responsible Argo app, letting Reloader do its thing, and afterwards checking the Argo app. My guess is that the application is out of sync and Argo is immediately reverting because of that.
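Concretely, that would mean temporarily turning off self-heal in the Application's syncPolicy (a fragment, assuming automated sync is in use):

```yaml
syncPolicy:
  automated:
    prune: true
    selfHeal: false   # disabled while debugging so Argo does not revert Reloader's change
```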
After Reloader propagated the changes, and while the new pods were starting, I clicked the ArgoCD Sync button. As a result, the new pods were immediately deleted and replaced with the old pods. I think ArgoCD auto-sync reverts the changes from Reloader.
Have you tried setting the reload strategy to annotations? Related issue: #701
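With the annotations strategy, Reloader records the reload in a pod-template annotation instead of an env var, which may play better with ArgoCD's diffing. A sketch of enabling it via the Reloader container args (the exact helm value that maps to this flag may differ by chart version):

```yaml
# args on the Reloader container
args:
  - "--reload-strategy=annotations"
```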
Describe the bug
Tested with strategy: default or env-vars. Using ArgoCD. The ArgoCD app ory\oathkeeper includes the ConfigMap accessrules to monitor. The deployment has the annotation: configmap.reloader.stakater.com/reload: accessrules
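For context, a minimal illustration of where that annotation lives (metadata only; the rest of the Deployment is omitted):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oathkeeper   # illustrative
  annotations:
    configmap.reloader.stakater.com/reload: "accessrules"
```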
When I modify the ConfigMap in git, Reloader notices the change and creates new pods, but they are deleted shortly after creation and the old version remains intact.
To Reproduce
Steps to reproduce the behavior
Expected behavior
The old pod should be deleted and the new pod should remain.
Logs from the Reloader
Environment
Additional context
the helm values file: