
Supporting More Authentication Mechanisms #42

Open
everpeace opened this issue Jun 29, 2023 · 9 comments
Labels
enhancement New feature or request

Comments


everpeace commented Jun 29, 2023

Hi, thank you very much for the great project! I'm really surprised that FUSE can run in the sidecar container without any privileges!

From a Kubernetes platform admin's point of view, supporting FUSE used to be difficult (risky) because we had to grant privileges to the FUSE containers in applications. But this project proves that limitation can be broken, thanks to the "file descriptor passing" between the CSI driver and the FUSE sidecar, which encapsulates the privileged operations in the CSI driver.

Context/Scenario

  • I (platform admin) develop an in-house Kubernetes platform for internal application developers
  • I would like to support gcs-fuse-csi-driver in our clusters (multiple clusters)
  • The GCP projects of the Kubernetes clusters are managed by us (platform admin)
  • But each GCP project for applications is fully owned by the application developers

The Problem

Currently, the gcs-fuse-csi-driver implementation depends on Workload Identity.

However, if I understand correctly, when an application runs in multiple Kubernetes clusters, the application developer has to create an iam-policy-binding for each cluster's Kubernetes service account, because applications running on different clusters have different Workload Identities. That also means the application developer needs to update the iam-policy-bindings whenever one of our clusters is added or removed.

As a platform admin, I find this UX inconvenient. I would like to reduce this toil on the application developer side.

Proposals

Option 1. Supporting GCP Service Account's Private Key in Kubernetes Secret

This would be handy. Of course, I understand that Workload Identity is more secure than a long-lived (never-expiring) secret key file.

Our platform can provide a feature that syncs the Secret across our clusters. In that case, application developers need to do nothing when a cluster their application runs on is added or removed; all they need to do is specify the Secret name in their manifest.

By the way, gcsfuse also accepts key-file as a CLI argument, but gcs-fuse-csi-driver explicitly prohibits using that argument. Is there any reason for this?

In this option, I imagine the changes below:

  • support an extra attribute (say, secretName) in volumeAttributes (and in MountConfig)
  • csi-driver
    • reads the Kubernetes Secret,
    • stores it somewhere shared with the sidecar container (/gcsfuse-tmp/.volumes/<volume-name>/service_account.json?),
    • sets the path in MountConfig (we need to add a field for this),
    • and passes it to the sidecar
  • sidecar-mounter runs gcsfuse with key-file=...
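The steps above could be sketched as follows. This is a minimal Python illustration, not driver code (the real driver is written in Go); the function names and the secretName attribute are hypothetical, while the path layout and the gcsfuse key-file flag follow what this thread proposes:

```python
import os


def stage_service_account_key(base_dir, volume_name, key_json):
    """Mimic the proposed csi-driver step: after reading the Kubernetes
    Secret named by the (hypothetical) secretName volume attribute, write
    the key to a per-volume path shared with the sidecar container."""
    vol_dir = os.path.join(base_dir, ".volumes", volume_name)
    os.makedirs(vol_dir, exist_ok=True)
    key_path = os.path.join(vol_dir, "service_account.json")
    # The key is a long-lived credential: create the file with mode 0600.
    fd = os.open(key_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(key_json)
    return key_path


def gcsfuse_args(key_path, bucket, mount_point):
    """The sidecar-mounter would then pass the staged file to gcsfuse
    via its key-file flag."""
    return ["gcsfuse", f"--key-file={key_path}", bucket, mount_point]


if __name__ == "__main__":
    import tempfile

    base = tempfile.mkdtemp()  # stands in for /gcsfuse-tmp in this sketch
    key_path = stage_service_account_key(
        base, "my-volume", '{"type": "service_account"}'
    )
    print(gcsfuse_args(key_path, "my-bucket", "/mnt/gcs"))
```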

Option 2. Supporting Workload Identity Federation

This would be more secure and is arguably more standard. Recently, application identity mechanisms have emerged that are not tied to a single Kubernetes cluster's authority (e.g., SPIFFE). With these, an application can have a stable identity even when it runs on multiple Kubernetes clusters.

I think this fits the Workload Identity Federation use case perfectly.

In this option, I imagine the changes below:

  • support the volumeAttributes required for workload identity federation, say
    • workloadIdentityProvider
    • serviceAccountEmail
  • also support annotations for application identity info, which the Kubernetes platform is assumed to be responsible for providing
    • gke-gcsfuse/credential-source-volume
    • gke-gcsfuse/credential-source-file
  • webhook
    • injects a volumeMount into the sidecar container for gke-gcsfuse/credential-source-volume
    • adds extra args for the application credential file to the sidecar-mounter
  • csi-driver
    • reads the attributes, sets them in MountConfig, and passes them to the sidecar
  • sidecar-mounter
    • bootstraps a credential configuration file from the provided information (/gcsfuse-tmp/.volumes/<volume-name>/credential_configuration.json could be used?)
    • then runs gcsfuse with key-file=...
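The bootstrap step above could look like the Python sketch below. It assumes the sidecar-mounter assembles a standard GCP external_account credential configuration (the JSON format Google auth libraries accept for Workload Identity Federation) from the proposed workloadIdentityProvider and serviceAccountEmail attributes; the helper names and the token path are hypothetical:

```python
import json


def build_credential_config(workload_identity_provider, service_account_email,
                            token_path):
    """Assemble an external_account credential configuration dict from the
    proposed volumeAttributes, following the standard GCP format."""
    return {
        "type": "external_account",
        # workload_identity_provider is the provider's full resource name, e.g.
        # projects/<num>/locations/global/workloadIdentityPools/<pool>/providers/<id>
        "audience": f"//iam.googleapis.com/{workload_identity_provider}",
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
        "token_url": "https://sts.googleapis.com/v1/token",
        "service_account_impersonation_url": (
            "https://iamcredentials.googleapis.com/v1/projects/-/"
            f"serviceAccounts/{service_account_email}:generateAccessToken"
        ),
        # Points at the projected application token, i.e. the file the
        # proposed gke-gcsfuse/credential-source-* annotations describe.
        "credential_source": {"file": token_path},
    }


def write_credential_config(config, path):
    """Persist the config, e.g. under the per-volume directory suggested
    above, so gcsfuse can consume it as a key file."""
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
```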

I would really appreciate any feedback. Thanks in advance.

@everpeace everpeace changed the title Flexible Authentication Support More Authentication Mechanism Support Jun 29, 2023
@everpeace everpeace changed the title More Authentication Mechanism Support Supporting More Authentication Mechanisms Jun 29, 2023
@songjiaxun songjiaxun added the enhancement New feature or request label Jul 13, 2023
@songjiaxun
Contributor

Thank you so much for illustrating the problem and context so clearly, and proposing potential solutions. We will take your proposals into consideration for the upcoming releases.

Just to clarify: Is the current design (only supporting Workload Identity) blocking your development on your in-house Kubernetes clusters? Or is the proposal just for avoiding the toil?


everpeace commented Jul 14, 2023

Thank you so much for illustrating the problem and context so clearly, and proposing potential solutions. We will take your proposals into consideration for the upcoming releases.

Thank you very much!

Just to clarify: Is the current design (only supporting Workload Identity) blocking your development on your in-house Kubernetes clusters? Or is the proposal just for avoiding the toil?

Actually, it is not a blocker currently because the number of clusters is not so large. But it could become a problem in the near future.

We're using Fleet Workload Identity. I now understand that supporting Workload Identity Federation has priority.


xieydd commented Oct 25, 2023

Context/Scenario:

  • As a managed service, our users' service account keys are stored in our platform.
  • The developers want to mount their GCS buckets into the inference pods.

Thanks a lot to @everpeace for proposing potential solutions. I think the first option is very similar to my scenario.

Is there any progress on this issue? @songjiaxun

p.s. ofek/csi-gcs looks like a good choice.

@songjiaxun
Contributor

Hi @xieydd , we have made the decision to not support service account keys. Workload Identity is the recommended way to access Google Cloud services from within GKE. Workload Identity allows you to configure a Kubernetes service account to act as a Google service account, and avoid managing and protecting secrets manually. Please try to migrate to Workload Identity. Thank you!
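For reference, the standard Workload Identity setup described above boils down to annotating the Kubernetes service account with the Google service account it should act as (all names below are placeholders), plus the per-cluster iam-policy-binding discussed earlier in this thread:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-ksa            # placeholder KSA name
  namespace: my-namespace # placeholder namespace
  annotations:
    # Standard Workload Identity annotation: the GSA this KSA impersonates.
    iam.gke.io/gcp-service-account: my-gsa@my-project.iam.gserviceaccount.com
```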


everpeace commented Oct 26, 2023

Thanks for the update.

we have made the decision to not support service account keys. Workload Identity is the recommended way to access Google Cloud services from within GKE.

I think this is a reasonable decision in terms of security (long-lived keys are dangerous, seldom rotated, hard to rotate safely, etc.). I support this.

Option 2. Supporting Workload Identity Federation

Are there any plans for federated identity support other than Workload Identity (e.g., SPIFFE)? Workload Identity and Workload Identity Federation depend on very similar mechanisms, so I suppose supporting this would add no security risk.


xieydd commented Oct 26, 2023

Hi @xieydd , we have made the decision to not support service account keys. Workload Identity is the recommended way to access Google Cloud services from within GKE. Workload Identity allows you to configure a Kubernetes service account to act as a Google service account, and avoid managing and protecting secrets manually. Please try to migrate to Workload Identity. Thank you!

Thanks for your reply; I will look into Workload Identity.

@songjiaxun
Contributor

Hi @everpeace , I will spend some time doing my research on the federated identity, and will keep you updated.

@sshcherbakov

Hello All,

It seems that Workload Identity Federation is not supported by this CSI driver yet.

This is very unfortunate, since it means the GCS CSI driver cannot run outside of Google Cloud: it relies on the metadata service present on the nodes.

That in turn makes the GCS CSI driver unavailable in GKE on VMware, GKE on Bare Metal, and other GKE Enterprise flavors, which customers would expect it to support, since these are Google Cloud products.

A sample Workload Identity Federation implementation is working well in the Google Cloud Secret Manager CSI Driver.

Is my understanding correct that there is no way of mounting GCS buckets into Kubernetes clusters running outside of Google Cloud (using this driver)?

sshcherbakov added a commit to sshcherbakov/gcs-fuse-csi-driver that referenced this issue Apr 9, 2024
sshcherbakov added a commit to sshcherbakov/gcs-fuse-csi-driver that referenced this issue Apr 10, 2024
sshcherbakov added a commit to sshcherbakov/gcs-fuse-csi-driver that referenced this issue Apr 15, 2024
@songjiaxun
Contributor

As of now, unfortunately, we still don't have enough bandwidth to work on supporting other auth methods. However, I've created a POC branch that supports GCP SA keys: 0d32b40

Development

No branches or pull requests

4 participants