This repository has been archived by the owner on Mar 28, 2024. It is now read-only.

Controller failing to prune deployments #96

Open
netthier opened this issue Apr 19, 2023 · 6 comments · Fixed by kluctl/kluctl#419

Comments

@netthier

Controller Version

v0.15.0

Kubernetes Version

v1.26.4

Bug description

I noticed that when I change something about my deployments, such as resource names, the new versions get deployed without the old ones being cleaned up afterwards.
When removing and recreating the KluctlDeployment resource, the previously deployed resources do not get removed from the cluster and keep running alongside the new deployment, in some cases leading to conflicts, e.g. when a second Ingress with the same domain and paths is created.
prune is set to true, but the KluctlDeployments all contain this in their status:

  Last Prune Result:
    Error:         pruning without a discriminator is not supported
    Objects Hash:  63f7995789469181cf13f09e4824a40246eba2410188c32b2ea67c4eebe6f8f3
    Revision:      refs/heads/main/64c7ad46e099791ebdc2590bace7eb815505f51f
    Target:        oldstable
    Time:          2023-04-19T09:47:28Z

Interestingly, the status also says this:

  Conditions:
    Last Transition Time:   2023-04-19T09:50:31Z
    Message:                deploy: ok, prune: ok, validate: ok
    Reason:                 ReconciliationSucceeded
    Status:                 True
    Type:                   Ready

so I'm a bit confused.
What exactly is this "discriminator" that seems to be missing?
I am aware that the prune and delete commands require commonLabels to be specified in the kluctl project, but I have done that and every deployed resource also has the labels:

# deployment.yaml
commonLabels:
  kluctl.myorg.com/project: "super-cool-project"
  kluctl.myorg.com/target: "{{ target.name }}"
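
For reference, here is roughly what one of the affected KluctlDeployments looks like. This is a sketch: the apiVersion is inferred from the controller's API group in the logs below, and the Git URL and interval are placeholders.

# kluctldeployment.yaml (sketch; Git URL and interval are placeholders)
apiVersion: flux.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
  name: project-oldstable
  namespace: kluctl
spec:
  interval: 5m
  source:
    url: https://github.com/myorg/super-cool-project.git
  target: oldstable
  prune: true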

Steps to reproduce

Not sure; it seems to happen with all of my projects, including the simplest ones.
I did not have this issue with previous versions of the controller (not sure which version exactly, but pre-SOPS).

Relevant log output

{"level":"info","ts":"2023-04-19T15:06:21.101Z","msg":"No discriminator configured. Orphan object detection will not work","controller":"kluctldeployment","controllerGroup":"flux.kluctl.io","controllerKind":"KluctlDeployment","KluctlDeployment":{"name":"project-oldstable","namespace":"kluctl"},"namespace":"kluctl","name":"project-oldstable","reconcileID":"33a65933-9d04-41d9-bb4b-11a8062f8954"}
{"level":"info","ts":"2023-04-19T15:06:21.101Z","msg":"No discriminator configured for target, retrieval of remote objects will be slow.","controller":"kluctldeployment","controllerGroup":"flux.kluctl.io","controllerKind":"KluctlDeployment","KluctlDeployment":{"name":"project-oldstable","namespace":"kluctl"},"namespace":"kluctl","name":"project-oldstable","reconcileID":"33a65933-9d04-41d9-bb4b-11a8062f8954"}
@codablock
Contributor

Discriminators were introduced in Kluctl 2.19.0 and replace the use of commonLabels for identifying objects belonging to a project/target.

Documentation can be found here: https://kluctl.io/docs/kluctl/reference/kluctl-project/#discriminator

I would suggest adding a discriminator template to .kluctl.yaml that at least takes the deployment name and target name into account, something like:

# .kluctl.yaml
targets:
- name: my-target
  ...

discriminator: "my-deployment-name-{{ target.name }}"

@netthier
Author

Thanks for the info!
Maybe the documentation here should be updated then: https://kluctl.io/docs/kluctl/reference/deployments/deployment-yml/#commonlabels

commonLabels
[...]
The root deployment’s commonLabels is also used to identify objects to be deleted when performing kluctl delete or kluctl prune operations

@codablock
Contributor

Agree, thanks for pointing this out. Updated it here: kluctl/kluctl#419
In the future, feel free to try out the "Edit this page" feature on the docs pages if you want; I'd be happy to see contributions in all forms :)

I'll close this issue now as I assume everything is clarified.

@netthier
Author

@codablock I think the status showing deploy: ok, prune: ok, validate: ok despite prune explicitly erroring out is a bit confusing and probably a bug.

@codablock
Contributor

Oh, yes that is clearly misleading. I'll change it to show prune: failed in the next release.
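
i.e. the Ready condition message would then read something like:

  Conditions:
    Message:                deploy: ok, prune: failed, validate: ok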

@codablock reopened this Apr 20, 2023
@codablock
Contributor

fyi, I have fixed this issue as part of kluctl/kluctl#486. I will not fix it in the classical flux-kluctl-controller.
