Ability to run the provisioner on the ZFS host. #130
Comments
You can modify the kubeconfig file so that it uses a service account. The helm chart installs the necessary RBAC rules for the service account (https://github.com/ccremer/kubernetes-zfs-provisioner/blob/master/charts/kubernetes-zfs-provisioner/templates/rbac.yaml). I don't know the exact steps, but it should be possible; search around online a bit. Then, you can create a script named `/usr/bin/update-permissions` on the ZFS host with the following contents:

```bash
#!/bin/bash
set -eo pipefail
zfs_mountpoint="${2}"
chmod g+w "${zfs_mountpoint}"
```
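A rough sketch of one way to build such a kubeconfig (the namespace, service account name, and output path below are assumptions and depend on how the chart was installed):

```bash
# Sketch only: build a kubeconfig that authenticates as the provisioner's
# service account instead of the "admin" user. Names and paths are assumptions.
SA_NAMESPACE=zfs-provisioner            # namespace the helm chart was installed into
SA_NAME=kubernetes-zfs-provisioner      # service account created by the chart
OUT=zfs-provisioner.kubeconfig

# Request a short-lived token for the service account (kubectl >= 1.24).
TOKEN=$(kubectl -n "${SA_NAMESPACE}" create token "${SA_NAME}")

# Reuse the cluster definition from the current (admin) kubeconfig.
CLUSTER=$(kubectl config view -o jsonpath='{.clusters[0].name}')
SERVER=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}')

# Assemble a dedicated kubeconfig for the provisioner. In practice you would
# embed the cluster CA instead of skipping TLS verification.
kubectl --kubeconfig="${OUT}" config set-cluster "${CLUSTER}" \
  --server="${SERVER}" --insecure-skip-tls-verify=true
kubectl --kubeconfig="${OUT}" config set-credentials "${SA_NAME}" --token="${TOKEN}"
kubectl --kubeconfig="${OUT}" config set-context default \
  --cluster="${CLUSTER}" --user="${SA_NAME}"
kubectl --kubeconfig="${OUT}" config use-context default
```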
Hi @ccremer, I've managed to create a kubeconfig file for the service account. It works great. Regarding your suggestion of adding an /usr/bin/update-permissions file on the ZFS host, I think it would be nicer if the provisioner could just fall back to running chmod directly when that script isn't present.
🎉 Sure, that'll work, but I guess setting permissions over SSH should remain the default, to avoid a breaking change.
The default is to run kubernetes-zfs-provisioner in a container and create datasets via SSH on a remote host. To do that, the docker image is built with the zfs and update-permissions stubs, which both call commands on the remote host over SSH. This change allows running kubernetes-zfs-provisioner directly on the ZFS host by making the presence of the update-permissions script optional. The zfs stub is already optional because it merely replaces the command of the same name on the remote host. The provisioner now uses the command specified in the ZFS_UPDATE_PERMISSIONS environment variable, which is set to /usr/bin/update-permissions by default in the docker image; otherwise it falls back to chmod. Fixes ccremer#130.
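For context, the stubs mentioned above are small shell wrappers shipped in the docker image. A simplified, illustrative version of the zfs stub might look like the following; the real script and the user/host values will differ:

```bash
#!/bin/bash
# Illustrative sketch of the "zfs" stub inside the container image: it forwards
# the whole zfs invocation to the remote ZFS host over SSH. The user and host
# below are placeholders; the update-permissions stub follows the same pattern.
set -eo pipefail
ssh "zfs@zfs-host.example.com" zfs "$@"
```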
The default is to run kubernetes-zfs-provisioner in a container and create datasets via SSH on a remote host. To do that, the docker image is built with the zfs and update-permissions stubs, which both call commands on the remote host over SSH. This change allows running kubernetes-zfs-provisioner directly on the ZFS host by making the presence of the update-permissions script optional. The zfs stub is already optional because it merely replaces the command of the same name on the remote host. The provisioner now uses the update-permissions executable if it's present in the current PATH, otherwise it falls back to executing chmod g+w directly. Fixes ccremer#130.
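The fallback itself lives in the provisioner (Go code), but expressed as shell the decision amounts to roughly this; the mountpoint and the first argument are placeholders:

```bash
# Sketch of the behaviour described above: prefer the update-permissions stub
# when it is on the PATH (container use-case), otherwise chmod the mountpoint
# directly (running on the ZFS host). Values below are placeholders.
zfs_mountpoint="/tank/pv-example"

if command -v update-permissions >/dev/null 2>&1; then
  # Container default: the stub relays the call to the remote host over SSH.
  # The script shown earlier reads the mountpoint from its second argument.
  update-permissions placeholder "${zfs_mountpoint}"
else
  # Directly on the ZFS host: no stub needed, adjust permissions locally.
  chmod g+w "${zfs_mountpoint}"
fi
```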
As in issue #85, my ZFS host is part of the cluster as a worker node. I'm not a real fan of using SSH from the provisioner container to run commands on the ZFS host and I'd much prefer running the provisioner as a daemon directly on the ZFS host.

So far I've been able to make it work by running `kubernetes-zfs-provisioner` directly on the ZFS host with the `ZFS_KUBE_CONFIG_PATH` environment variable pointing to my "admin" kubeconfig. Obviously this is not ideal because the "admin" user permissions are too open for what the provisioner has to do. What would be the right thing to do instead here?
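For illustration, the workaround amounts to something like this on the ZFS host (the kubeconfig path is an example only):

```bash
# Run the provisioner as a daemon directly on the ZFS host, pointing it at a
# kubeconfig instead of relying on in-cluster configuration.
export ZFS_KUBE_CONFIG_PATH=/etc/zfs-provisioner/kubeconfig
kubernetes-zfs-provisioner
```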
It also required this tiny change:
The change is quite simple, but it does break the "normal" use case. I'm not sure how we could make it more generic.