Controller failing to prune deployments #96
Discriminators were introduced in Kluctl 2.19.0 and replace the use of `commonLabels` for pruning and deletion. Documentation can be found here: https://kluctl.io/docs/kluctl/reference/kluctl-project/#discriminator

I would suggest adding a discriminator template to your targets:

```yaml
targets:
  - name: my-target
    ...
    discriminator: "my-deployment-name-{{ target.name }}"
```
Thanks for the info!
Agree, thanks for pointing this out. Updated it here: kluctl/kluctl#419. I'll close this issue now as I assume everything is clarified.
@codablock I think the status showing
Oh, yes that is clearly misleading. I'll change it to show
FYI, I have fixed this issue as part of kluctl/kluctl#486. I will not fix it in the legacy flux-kluctl-controller.
Controller Version
v0.15.0
Kubernetes Version
v1.26.4
Bug description
I noticed that when I change some things about my deployments, such as resource names, the new versions get deployed without the old ones being cleaned up afterwards.
When removing and recreating the `KluctlDeployment` resource, the deployed resources do not get removed from the cluster and keep running alongside the new deployment, in some cases leading to conflicts, e.g. if a second ingress with the same domain and paths is created.

`prune` is set to `true`, but the `KluctlDeployments` all contain this in their status:

Interestingly, the status also says this:

so I'm a bit confused.
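For reference, the `KluctlDeployment` objects are defined roughly like this (a minimal sketch with placeholder names and repository URL; the exact spec layout may differ slightly between controller versions):

```yaml
apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: my-deployment            # placeholder name
  namespace: flux-system         # placeholder namespace
spec:
  interval: 5m
  source:
    url: https://github.com/example/my-kluctl-project.git   # placeholder repository
    ref:
      branch: main
  target: my-target
  prune: true    # old/replaced resources should be cleaned up on changes
  delete: true   # deployed resources should be removed when this object is deleted
```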
What exactly is this "discriminator" that seems to be missing?
I am aware that the `prune` and `delete` commands require `commonLabels` to be specified in the kluctl project, but I have done that, and every deployed resource also has the labels:
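These labels come from `commonLabels` in the project's top-level `deployment.yml`, which looks roughly like this (a sketch with placeholder label keys and include paths, not the actual project files):

```yaml
# deployment.yml (sketch)
deployments:
  - path: my-app                                            # placeholder include
commonLabels:
  examples.kluctl.io/deployment-project: my-project
  examples.kluctl.io/deployment-target: "{{ target.name }}"
```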
Steps to reproduce
Not sure, it seems to happen with all of my projects, including the simplest ones.
I did not have this issue with previous versions of the controller (not sure which version but pre-SOPS).
Relevant log output