Automatically add workload cluster kubeconfig to Argo #1665
Some customers use ArgoCD on the management cluster. To be able to deploy to all workload clusters with ArgoCD, each workload cluster's kubeconfig needs to be added to ArgoCD.

See upstream issue: argoproj/argo-cd#4651
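For context, Argo CD registers external clusters declaratively through Kubernetes secrets carrying a well-known label. A minimal sketch of that format (cluster name, endpoint, and all values are placeholders):

```yaml
# Minimal declarative Argo CD cluster secret; all values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: demo-cluster-secret
  namespace: argocd          # by default Argo CD only watches its own namespace
  labels:
    # Argo CD picks up any secret carrying this label as a cluster definition.
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: demo-cluster
  server: https://api.demo-cluster.example.com:6443
  config: |
    {
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded CA certificate>",
        "certData": "<base64-encoded client certificate>",
        "keyData": "<base64-encoded client key>"
      }
    }
```

Note that by default Argo CD only discovers cluster secrets in its own namespace, which matters for where any automation places the generated secret.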
@puja108 @QuentinBisson this is the issue I created yesterday. AFAIK there is another one. Where is it?
Honeybadger has the bigger context right now and will try to quickly enable a workaround for the customer, but is involving Rainbow to move the context of access to WCs from MC operators and MC isolation/security over to Rainbow.
Team Honeybadger will document the context and process of RBAC management and will then kick off a pairing session with Rainbow to enable this use case.
Upstream issue with a pointer to a Kyverno-based solution: argoproj/argo-cd#4651
I also cannot find the other issue anymore; maybe Quentin knows. Anyway, we'll focus on solving the customer problem now and take up the use case mentioned by two customers.
@gianfranco-l the generic issues that need to be handed over to Rainbow are these:
This one here is about a solution for the customers, and the result of the discussion was that Honeybadger will solve this.
thank you very much for clarifying :)
Update: added a Kyverno policy to all clusters - except KVM ones at the moment, because they are special compared to the other ones in this regard. The policy creates the Argo secret in the same namespace where the original kubeconfig secret is. The secrets follow this scheme:

```
config: <SEE_EXAMPLE_BELOW>
name: <CLUSTER_NAME>
server: <API_URL>
```

An example of what is base64-encoded into the `config` field:

```json
{
  "tlsClientConfig": {
    "insecure": false,
    "caData": "...",
    "certData": "...",
    "keyData": "..."
  }
}
```

Will look into KVM, but other clusters seem fine at the moment.
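For the record, a minimal sketch of what such a Kyverno policy could look like. This is illustrative and not necessarily the exact policy deployed: it assumes CAPI-style kubeconfig secrets named `<cluster>-kubeconfig` with the kubeconfig YAML under the `value` key, a single cluster/user entry in that kubeconfig, and an illustrative naming scheme for the generated secret.

```yaml
# Sketch: mirror a CAPI kubeconfig secret into an Argo CD cluster secret
# in the same namespace. Names and the kubeconfig layout are assumptions.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-argocd-cluster-secret   # illustrative name
spec:
  rules:
    - name: generate-argocd-cluster-secret
      match:
        any:
          - resources:
              kinds:
                - Secret
              # Assumes CAPI-style kubeconfig secrets named <cluster>-kubeconfig.
              names:
                - "*-kubeconfig"
      context:
        # Decode and parse the kubeconfig stored under the "value" key.
        - name: kubeconfig
          variable:
            jmesPath: parse_yaml(base64_decode(request.object.data.value))
      generate:
        apiVersion: v1
        kind: Secret
        name: "{{ request.object.metadata.name }}-argocd"   # illustrative naming scheme
        namespace: "{{ request.object.metadata.namespace }}"
        synchronize: true
        data:
          metadata:
            labels:
              # Argo CD discovers cluster secrets via this label.
              argocd.argoproj.io/secret-type: cluster
          stringData:
            name: "{{ kubeconfig.clusters[0].name }}"
            server: "{{ kubeconfig.clusters[0].cluster.server }}"
            config: |
              {
                "tlsClientConfig": {
                  "insecure": false,
                  "caData": "{{ kubeconfig.clusters[0].cluster."certificate-authority-data" }}",
                  "certData": "{{ kubeconfig.users[0].user."client-certificate-data" }}",
                  "keyData": "{{ kubeconfig.users[0].user."client-key-data" }}"
                }
              }
```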
Thanks @uvegla ! There is no need to do this on KVM. I'm not sure we should really roll this out to all MCs: copying cluster credentials where they aren't needed increases risk. This is only necessary for MCs where Argo is installed. I'd say the Kyverno policy belongs in the Helm chart of Argo.
@teemow Fair point; we deemed it fine to add to all clusters because the secret lives in the same namespace as the kubeconfig anyway. I feel it can be confusing in certain scenarios to find a secret called
@teemow @mproffitt One important note though: the policy differs for
The Helm approach has a couple of drawbacks that I see.
That said, I also agree with the points made by @teemow about risk.
WDYT? Is this something that would potentially satisfy all scenarios?
One customer is using a service account on the workload cluster. This seems to be a much better solution than reusing the same kubeconfig to connect to the clusters. I'll add this information to the more general issue: #1666. We can deploy the Kyverno policy on the management cluster for the other customer that wants to use Argo, imo, and then discuss the general solution in the other issue.
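For reference, the service-account approach fits the same Argo CD cluster-secret format; only the `config` JSON changes. A sketch, assuming a ServiceAccount with a token and suitable RBAC has already been created on the workload cluster (all values are placeholders):

```json
{
  "bearerToken": "<token of the ServiceAccount on the workload cluster>",
  "tlsClientConfig": {
    "insecure": false,
    "caData": "<base64-encoded CA certificate of the workload cluster>"
  }
}
```

This avoids copying the admin kubeconfig around: the credential is scoped to whatever RBAC the workload-cluster ServiceAccount is granted.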
Deployed it to the selected customer's management cluster. Closing this in favour of #1666.