No privs to container, how to mount before starting? #70
I know this is a bit old, and you've probably already found an answer, but I figured I'd at least comment on it. From what I understand, SYS_ADMIN is the only capability required to perform mounts within containers. On newer kernels (specifically Ubuntu or variants with AppArmor), SYS_ADMIN alone is no longer sufficient to perform mounts, and you also need either a privileged container or a relaxed AppArmor profile.

In your case, consider the container as either running or not running; there is no in-between state. So if you do not want to grant the appropriate permissions within the container to perform the mounting (i.e. while the container is running), then what you're thinking of is performing the mount on the container's host (i.e. before the container starts) and then doing a simple mapping of a local directory (i.e. the one you mounted on the container host from the NFS share) into your container's directory.
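To make that concrete, here is a hedged sketch of the two options (the image names, server hostname, and paths are hypothetical):

```sh
# Option 1: let the container mount for itself by granting SYS_ADMIN;
# on AppArmor hosts the default Docker profile also blocks mount(2),
# so it has to be relaxed as well.
docker run --cap-add SYS_ADMIN \
  --security-opt apparmor=unconfined \
  my-nfs-client-image

# Option 2: mount on the host first, then bind-map the already-mounted
# directory into the container; no extra privileges needed inside it.
sudo mount -t nfs nfs-server.example.com:/exports/data /mnt/data
docker run -v /mnt/data:/data my-app-image
```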
Hello Ryan, it is an old issue, but I've not been able to prioritize it and find a solution yet, so thank you for your help. I have a rough understanding; see if I'm following correctly.
The script detects
So it appears the fstab commands are not working as intended. Does fstab (on start) also need privileges to mount its own filesystems, or am I missing some other mount or action that requires privileges? entrypoint.sh
Translates to:
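As a hypothetical sketch only (not the project's actual entrypoint.sh), an fstab-driven entrypoint usually boils down to something like the following, and every mount it performs needs the same privileges as a manual mount:

```sh
#!/bin/sh
# Hypothetical sketch -- not the actual entrypoint.sh from this repo.
set -e

# Append any configured shares to fstab (variable name is made up).
if [ -n "$NFS_EXPORTS" ]; then
  echo "$NFS_EXPORTS" >> /etc/fstab
fi

# mount -a walks /etc/fstab and issues mount(2) for each entry; each
# of those syscalls requires CAP_SYS_ADMIN inside the container, which
# is why fstab-driven mounts at startup hit the same privilege wall.
mount -a

exec "$@"
```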
Hey, sorry for the delay. I've given up using this, as it did not accomplish what I was originally intending. I was attempting to containerize my NFS and use it as a sort of CSI driver replacement. The issue I had was that references to the nfs-server container hosting the NFS shares seemed to be unknown to the host (because Docker knows Docker DNS, but the host itself doesn't), so all of the services/containers would fail to start, citing a bad mount path.

I have rolled the NFS back to the host level, cut out all the permissions for the directories and opened them wide (nobody:nogroup and 777), then mounted the shares on my swarm nodes and changed the Docker Compose config to point to the shares (which are mounted identically on all hosts). This works well enough for me. Originally I did not want to do this because setting permissions this way seems like a giant security issue... but I was ignoring the fact that I would otherwise have a container running as user 0 with high privileges, so there was really not much of a trade-off.

To your questions, though: I am not sure how to get around this limitation. The problem seems to be the permissions required to use the nfs-kernel-server module: permission limitations imposed by Docker as a security measure that are unique to the way NFS and fstab mount things. I didn't troubleshoot this any further after I ran into the issue and started looking at alternatives, but my search kept turning up options that required K8s, and I don't run K8s, so most of those were out of the question. By the time I got back to this, I just decided to retry NFS (for a third time) straight on the hosts.
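For reference, a sketch of that host-level setup (the paths, subnet, and hostnames here are hypothetical):

```sh
# On the NFS host: open the export wide, as described above.
sudo chown -R nobody:nogroup /srv/nfs/share
sudo chmod -R 777 /srv/nfs/share
echo '/srv/nfs/share 192.168.1.0/24(rw,sync,no_subtree_check)' \
  | sudo tee -a /etc/exports
sudo exportfs -ra

# On every swarm node: mount the share at an identical path...
sudo mkdir -p /mnt/share
sudo mount -t nfs nfs-host.example.com:/srv/nfs/share /mnt/share

# ...so the compose file can bind the same host path on any node:
#   volumes:
#     - /mnt/share:/data
```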
I would like to remove the following requirements to better secure the host.
Would it be possible to mount the shares prior to starting the container, rather than mounting them after startup? I'm investigating this now, but have a ways to go to fully understand it.
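One option worth investigating, assuming the shares are plain NFS exports, is a named volume using the local driver's NFS support: Docker performs the mount on the host side when the container starts, so the container itself needs no mount privileges. The server address, export path, and names below are hypothetical:

```sh
# Create an NFS-backed named volume; the host, not the container,
# performs the mount when a container using the volume starts.
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=nfs-server.example.com,rw,nfsvers=4 \
  --opt device=:/exports/data \
  nfsdata

# The container sees the share as an ordinary volume.
docker run -v nfsdata:/data my-app-image
```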
First command at hand:

```
mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)')
```
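Decoded, the flag 0x00008000 is MS_SILENT, so this is the container mounting the kernel's RPC pipe filesystem (used by NFS daemons such as rpc.idmapd). A rough shell equivalent, which likewise requires CAP_SYS_ADMIN, would be:

```sh
# Mount the RPC pipe filesystem the NFS server components depend on;
# like any mount(2) call, this returns EPERM in a container without
# CAP_SYS_ADMIN (or where AppArmor denies mounting).
mount -t rpc_pipefs rpc_pipefs /var/lib/nfs/rpc_pipefs
```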