Supporting More Authentication Mechanisms #42
Comments
Thank you so much for illustrating the problem and context so clearly, and proposing potential solutions. We will take your proposals into consideration for the upcoming releases. Just to clarify: Is the current design (only supporting Workload Identity) blocking your development on your in-house Kubernetes clusters? Or is the proposal just for avoiding the toil? |
Thank you very much!
Actually, we're using Fleet Workload Identity. I now understand that supporting Workload Identity Federation has priority. |
> Context/Scenario
Thanks a lot to @everpeace for proposing potential solutions. I think the first option is very similar to my scenario. Is there any progress on this issue? @songjiaxun P.S. ofek/csi-gcs looks like a good choice. |
Hi @xieydd , we have made the decision to not support service account keys. Workload Identity is the recommended way to access Google Cloud services from within GKE. Workload Identity allows you to configure a Kubernetes service account to act as a Google service account, avoiding the need to manage and protect secrets manually. Please try to migrate to Workload Identity. Thank you!
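For reference, the standard Workload Identity setup binds the Kubernetes service account to a Google service account roughly like this (a sketch with placeholder names, not commands from this thread):

```sh
# Allow the Kubernetes SA to impersonate the Google SA
# (PROJECT_ID, NAMESPACE, KSA_NAME, GSA_NAME are placeholders).
gcloud iam service-accounts add-iam-policy-binding \
  GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# Annotate the Kubernetes SA so GKE maps it to the Google SA.
kubectl annotate serviceaccount KSA_NAME --namespace NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
```
|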
Thanks for the update.
I think this is a reasonable decision in terms of security (a long-lived key is dangerous: seldom rotated, hard to rotate safely, etc.). I can support this.
Are there any plans for federated identity support other than Workload Identity (e.g. SPIFFE)? Workload Identity and Workload Identity Federation depend on very similar mechanisms, so I suppose there would be no additional security risk in supporting it. |
Thanks for your reply, I will find out. |
Hi @everpeace , I will spend some time doing research on federated identity, and will keep you updated. |
Hello All, It seems that Workload Identity Federation is not supported by this CSI driver yet. This is very unfortunate: the GCS CSI driver cannot run outside of Google Cloud, since it relies on the metadata service present on the nodes. That in turn makes the GCS CSI driver unavailable in GKE on VMware, GKE on Bare Metal, and other GKE Enterprise flavours, where customers would expect it, since these are Google Cloud products. A sample Workload Identity Federation support is implemented and working well in the Google Cloud Secret Manager CSI Driver. Is my understanding correct that there is no way of mounting GCS buckets into Kubernetes clusters running outside of Google Cloud (using this driver)? |
As of now, unfortunately, we still don't have enough bandwidth to work on support for other auth methods. However, I've created a POC branch that supports GCP SA keys: 0d32b40 |
Hi, thank you very much for the great project! I'm really surprised that FUSE can run in the sidecar container without any privileges!
From a Kubernetes platform admin's point of view, supporting FUSE used to be difficult (risky) because we had to grant privileges to FUSE containers inside applications. But this project proves that the limitation can be broken, thanks to the "file descriptor passing" between the CSI driver and the FUSE sidecar, which encapsulates the privileged operations in the CSI driver.
Context/Scenario
The Problem
Currently, the `gcs-fuse-csi-driver` implementation depends on Workload Identity. However, if I understood correctly, when the application runs in multiple Kubernetes clusters, the application developer has to create an iam-policy-binding for each k8s cluster (k8s service account), because applications running on different clusters have different Workload Identities. That also means the application developer will need to update the iam-policy-binding whenever one of our clusters is added/removed.
As a platform admin, this UX is not so convenient. I would like to reduce this toil on the application developer side.
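A hypothetical sketch of the toil (placeholder names; assumes clusters live in different projects, so each has its own identity pool): adding a cluster means adding another IAM binding.

```sh
# Hypothetical: one extra binding per identity pool that hosts the app.
for POOL in cluster-a-project.svc.id.goog cluster-b-project.svc.id.goog; do
  gcloud iam service-accounts add-iam-policy-binding \
    app-gsa@shared-project.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:${POOL}[my-namespace/my-ksa]"
done
```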
Proposals
Option 1. Supporting GCP Service Account's Private Key in Kubernetes Secret
This would be handy. Of course, I understand Workload Identity is more secure than a long-lived (never-expiring) secret key file.
Our platform can provide a feature which syncs the secret across our clusters. In that case, application developers need to do nothing when the cluster the application runs on is added/removed; all they need to do is specify the secret name in their manifest.
By the way, `gcsfuse` also accepts `key-file` as a CLI argument, but `gcs-fuse-csi-driver` explicitly prohibits using that argument. Is there any reason for this?
In this option, I imagined the changes below (a sketch follows the list):
- support a new key (e.g. `secretName`) in volumeAttributes (also in `MountConfig`),
- mount the secret into the sidecar container (as `/gcsfuse-tmp/.volumes/<volume-name>/service_account.json`?),
- record the key file path in `MountConfig` (we need to add a field for this),
- launch `gcsfuse` with `key-file=...`.
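A minimal sketch of how Option 1 might look to a user, assuming the proposed `secretName` attribute (which does not exist in the current driver):

```yaml
# Hypothetical sketch only: secretName is a proposed attribute,
# not part of the current gcs-fuse-csi-driver API.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcs-fuse-pv
spec:
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 5Gi
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeHandle: my-bucket
    volumeAttributes:
      secretName: my-gcp-sa-key  # Secret holding the GCP SA key JSON (proposed)
```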
Option 2. Supporting Workload Identity Federation
This would be more secure and might become the standard. Recently, there are application identity mechanisms that are not tied to a single Kubernetes cluster's authority (e.g. SPIFFE). By using one of these, an application can have a stable identity even when it runs on multiple Kubernetes clusters.
I think this fits the Workload Identity Federation use case completely.
In this option, I imagined the changes below (a sketch follows the list):
- support new keys in volumeAttributes: `workloadIdentityProvider` and `serviceAccountEmail`,
- support new pod annotations: `gke-gcsfuse/credential-source-volume` and `gke-gcsfuse/credential-source-file`,
- inject a `volumeMount` into the sidecar container for the volume named by `gke-gcsfuse/credential-source-volume`,
- generate a credential configuration, record it in `MountConfig`, and pass it to the sidecar (`/gcsfuse-tmp/.volumes/<volume-name>/credential_configuration.json` can be used?),
- launch `gcsfuse` with `key-file=...`.
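A hypothetical sketch of how Option 2 might surface in a pod spec. Everything marked "proposed" comes from this issue and does not exist in the driver today; `csi.spiffe.io` is just one example of an identity-token source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    gke-gcsfuse/volumes: "true"
    gke-gcsfuse/credential-source-volume: spiffe-workload-api  # proposed
    gke-gcsfuse/credential-source-file: token                  # proposed
spec:
  serviceAccountName: my-ksa
  volumes:
    - name: spiffe-workload-api   # identity-token source (example)
      csi:
        driver: csi.spiffe.io
        readOnly: true
    - name: gcs-bucket
      csi:
        driver: gcsfuse.csi.storage.gke.io
        volumeAttributes:
          bucketName: my-bucket
          # proposed attributes from this issue:
          workloadIdentityProvider: projects/123456/locations/global/workloadIdentityPools/my-pool/providers/my-provider
          serviceAccountEmail: app-gsa@my-project.iam.gserviceaccount.com
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: gcs-bucket
          mountPath: /data
```

For reference, the `credential_configuration.json` the driver would generate is a standard GCP external-account credential config, roughly like this (all values are placeholders):

```json
{
  "type": "external_account",
  "audience": "//iam.googleapis.com/projects/123456/locations/global/workloadIdentityPools/my-pool/providers/my-provider",
  "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
  "token_url": "https://sts.googleapis.com/v1/token",
  "credential_source": { "file": "/gcsfuse-tmp/.volumes/<volume-name>/token" },
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/app-gsa@my-project.iam.gserviceaccount.com:generateAccessToken"
}
```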
I would very much appreciate any feedback. Thanks in advance.