
Automatically add workload cluster kubeconfig to Argo #1665

Closed
teemow opened this issue Nov 22, 2022 · 14 comments
Labels
effort/m Relative effort: medium impact/medium team/honeybadger Team Honey Badger

teemow commented Nov 22, 2022

Some customers use Argo CD on the management cluster. To be able to deploy to all workload clusters with Argo CD, each cluster's kubeconfig needs to be added to Argo CD.

See upstream issue: argoproj/argo-cd#4651

teemow commented Nov 23, 2022

@puja108 @QuentinBisson this is the issue I created yesterday. As far as I know there is another one. Where is it?

puja108 commented Nov 28, 2022

Honeybadger has the bigger context right now and will try to quickly enable a workaround for the customer, but will involve Rainbow to move the context of access to WCs, away from MC operators and MC isolation/security, over to Rainbow.

@puja108 puja108 moved this to Ready Soon (<4 weeks) in Roadmap Nov 29, 2022
@gianfranco-l

Team Honeybadger will document the context and process of RBAC management and then kick off a pairing session with Rainbow to enable this use case.

puja108 commented Nov 29, 2022

Upstream Issue with pointer to a Kyverno-based solution: argoproj/argo-cd#4651

puja108 commented Nov 29, 2022

I also cannot find the other issue anymore; maybe Quentin knows. Anyway, we'll focus on solving the customer problem now and take up the use case mentioned by two customers.

@gianfranco-l gianfranco-l moved this from Backlog (Scheduled) to In Refinement in Customer Board (incl. Requests) 🧑🏾‍🤝‍🧑🏻 Nov 29, 2022
@gianfranco-l gianfranco-l added the kind/cross-team Epics that span across teams label Nov 29, 2022
teemow commented Nov 30, 2022

@gianfranco-l the generic issues that need to be handed over to Rainbow are these:

This one here is about a solution for the customers, and the outcome of the discussion was that Honey Badger will solve it.

@teemow teemow removed team/rainbow kind/cross-team Epics that span across teams labels Nov 30, 2022
@kubasobon kubasobon added effort/s Relative effort: small impact/medium labels Nov 30, 2022
@gianfranco-l gianfranco-l moved this from In Refinement to Backlog (Scheduled) in Customer Board (incl. Requests) 🧑🏾‍🤝‍🧑🏻 Nov 30, 2022
@uvegla uvegla self-assigned this Nov 30, 2022
@gianfranco-l

thank you very much for clarifying :)

uvegla commented Dec 5, 2022

Update: added a Kyverno policy to all clusters — except KVM ones at the moment, because they differ from the others in this regard. The policy creates the Argo secret in the same namespace as the original kubeconfig secret.

The secrets follow the naming scheme <CLUSTER_NAME>-kubeconfig-argo and contain the following fields under .data:

config: <SEE_EXAMPLE_BELOW>
name: <CLUSTER_NAME>
server: <API_URL>

An example of what is base64-encoded into the config field:

{
  "tlsClientConfig": {
    "insecure": false,
    "caData": "...",
    "certData": "...",
    "keyData": "..."
  }
}

Will look into KVM, but other clusters seem fine at the moment.
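For illustration: Argo CD discovers clusters from Secrets labelled `argocd.argoproj.io/secret-type: cluster`, so a generate rule along the lines described above could look roughly like this. This is a hedged sketch, not the actual policy — the policy name, the `*-kubeconfig` match pattern, the name derivation, and the elided field values are all assumptions:

```yaml
# Sketch of a Kyverno generate rule that mirrors a cluster kubeconfig
# secret into an Argo CD cluster secret. Names and patterns are assumed.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-argocd-cluster-secret        # hypothetical name
spec:
  rules:
    - name: generate-argo-secret
      match:
        any:
          - resources:
              kinds:
                - Secret
              names:
                - "*-kubeconfig"          # assumed source-secret naming
      generate:
        apiVersion: v1
        kind: Secret
        # yields <CLUSTER_NAME>-kubeconfig-argo, derived from the source name
        name: "{{ request.object.metadata.name }}-argo"
        namespace: "{{ request.object.metadata.namespace }}"
        synchronize: true
        data:
          metadata:
            labels:
              # Argo CD picks up cluster secrets via this label
              argocd.argoproj.io/secret-type: cluster
          type: Opaque
          stringData:
            name: "..."    # cluster name, parsed from the source secret
            server: "..."  # API server URL, parsed from the kubeconfig
            config: "..."  # the tlsClientConfig JSON shown above
```

In practice the real policy would also have to extract caData/certData/keyData from the source kubeconfig, which is where the legacy vs. CAPx differences mentioned below come in.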

teemow commented Dec 5, 2022

Thanks @uvegla ! There is no need to do this on KVM.

I'm not sure we should really roll this out to all MCs. Copying cluster credentials where they aren't needed increases risk. This is only necessary on MCs where Argo is installed. I'd say the Kyverno policy belongs in the Argo Helm chart.

uvegla commented Dec 5, 2022

@teemow Fair point. We deemed it fine to add to all clusters because the secret lives in the same namespace the kubeconfig resides in anyway. Still, it can be confusing in certain scenarios to find a secret called *-argo if you don't use Argo. The Argo chart actually makes sense — what do you think @mproffitt?

uvegla commented Dec 5, 2022

@teemow @mproffitt One important note though: the policy differs for legacy and CAPx clusters, which may be hard to distinguish from within a Helm chart.

@mproffitt

The Helm approach has a couple of drawbacks that I see:

  • As you point out, differentiating between legacy and CAPx clusters
  • Detecting when a customer decides to install Argo, and automatically installing the policy when applicable

That said, I also agree with the points made by @teemow about risk.

Kyverno itself can potentially come to the rescue here: by using an additional generate rule, it should be possible to have the policy on standby in all MCs, but only bring it into existence once CRDs owned by Argo CD are detected. See https://kyverno.io/docs/writing-policies/generate/

WDYT? Is this something that would potentially satisfy all scenarios?
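The "on standby" idea could be sketched roughly as follows — again an assumption-laden sketch, not a tested policy. `applications.argoproj.io` stands in for "a CRD owned by Argo CD", the names are hypothetical, and the inner rules are elided:

```yaml
# Sketch: a bootstrap policy that only materializes the secret-generating
# policy once an Argo CD-owned CRD appears on the cluster.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enable-argo-kubeconfig-policy     # hypothetical name
spec:
  rules:
    - name: on-argo-crd
      match:
        any:
          - resources:
              kinds:
                - CustomResourceDefinition
              names:
                - applications.argoproj.io  # assumed Argo CD marker CRD
      generate:
        apiVersion: kyverno.io/v1
        kind: ClusterPolicy
        name: add-argocd-cluster-secret     # the policy from the comment above
        synchronize: true
        data:
          spec:
            rules: []                       # secret-generating rules elided
```

Whether generating a cluster-scoped ClusterPolicy this way works cleanly would need to be verified against the Kyverno version in use.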

teemow commented Dec 5, 2022

One customer is using a service account on the workload cluster. This seems to be a much better solution than reusing the same kubeconfig to connect to the clusters. I'll add this information to the more general issue: #1666

We can deploy the Kyverno policy on the management cluster for the other customer that wants to use Argo, imo, and then discuss the general solution in the other issue.
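For context, the service-account approach maps onto the same Argo CD cluster secret format, just with a bearer token instead of client certificates. A sketch with placeholder names and values:

```yaml
# Sketch of an Argo CD cluster secret using a workload cluster
# service account token. All names/values here are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-argo                    # placeholder
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster
  server: https://api.mycluster.example:6443
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 CA bundle>"
      }
    }
```

The service account would need RBAC on the workload cluster granting whatever Argo CD is expected to deploy, which keeps the management cluster's admin kubeconfig out of the picture.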

@gianfranco-l gianfranco-l added effort/m Relative effort: medium and removed effort/s Relative effort: small labels Dec 6, 2022
uvegla commented Dec 12, 2022

Deployed it to the selected customer's management cluster. Closing this in favour of #1666

@uvegla uvegla closed this as completed Dec 12, 2022
Repository owner moved this from Ready Soon (<4 weeks) to Released in Roadmap Dec 12, 2022
Repository owner moved this from WIP to Shipped in Customer Board (incl. Requests) 🧑🏾‍🤝‍🧑🏻 Dec 12, 2022