
Prevent provisioning of persistent volumes on nodes with an empty paths list #163

Open
Exagone313 opened this issue Dec 16, 2020 · 3 comments
Labels: documentation, question

Comments

@Exagone313

Hi,

I have a cluster with two nodes:

  • the first node (aka first.example) hosts the deployments that use local-path-provisioner
  • the second node (aka second.example) was recently added to make room for deployments that do not need a persistent volume.

local-path-provisioner has been configured with an empty paths list for all nodes other than the first node:

    {
        "nodePathMap": [
            {
                "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
                "paths": []
            },
            {
                "node": "first.example",
                "paths": ["/opt/local-path-provisioner"]
            }
        ]
    }

However, it seems that local-path-provisioner is trying to create persistent volumes on the second node:

error syncing claim "e0b17b36-47e5-416f-a4c7-5eada4db8165": failed to provision volume with StorageClass "local-path": no local path available on node second.example

My guess is that local-path-provisioner is not responsible for that node choice, and that this error is just a fail-safe:

If a node is listed with paths set to [], the provisioner will refuse to provision a volume on that node.

The node is selected by the scheduler for the volume:

/* vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/volume.go */

// ProvisionOptions contains all information required to provision a volume
type ProvisionOptions struct {
	 /* ... */
	// Node selected by the scheduler for the volume.
	SelectedNode *v1.Node
}

But the pod that interacts with the persistent volume claim is not yet scheduled on a node (its status is Pending); perhaps Kubernetes lacks a mechanism for resolving this case.

With a nodeSelector on the pod, the persistent volume is correctly created.
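
For illustration, here is a minimal sketch of that workaround; the claim and pod names are hypothetical, and only the node hostname first.example and the local-path storage class come from this setup:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc                # hypothetical name
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod                # hypothetical name
    spec:
      nodeSelector:
        kubernetes.io/hostname: first.example   # pin the pod to the node that has a path configured
      containers:
        - name: app
          image: busybox
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: demo-pvc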

If I am correct, this issue can be converted into a documentation issue, and I'm willing to add a sentence covering that case.


  • Kubernetes v1.18.9
  • rancher/local-path-provisioner:v0.0.17
@innobead (Collaborator)

innobead commented Mar 2, 2021

Yes, local-path-provisioner adopts the WaitForFirstConsumer volume binding mode to wait until the consumer pod is scheduled. This is the native behavior of dynamic provisioning.

We will enhance the README a bit to make this case clear for users. Contributions are also welcome!
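
For reference, the binding mode is set on the StorageClass. A sketch of what the shipped local-path class looks like (field values follow the project's deployment manifest as I understand it):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
    provisioner: rancher.io/local-path
    volumeBindingMode: WaitForFirstConsumer   # defer provisioning until a consuming pod is scheduled
    reclaimPolicy: Delete

With WaitForFirstConsumer, provisioning is deferred until a pod using the PVC is scheduled; the scheduler's node choice is then handed to the provisioner as SelectedNode, the field shown earlier in this thread.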

@innobead added the question and documentation labels on Mar 2, 2021
@ErikLundJensen

I suggest that you use a nodeSelector on the DaemonSet to install the provisioner only on the nodes that actually have local storage.
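
For illustration, a sketch of such a DaemonSet, trimmed to the fields relevant here and assuming the provisioner is deployed as a DaemonSet as this comment suggests (the stock manifests use a Deployment, where the same nodeSelector idea applies); the storage: local label is hypothetical:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: local-path-provisioner
      namespace: local-path-storage
    spec:
      selector:
        matchLabels:
          app: local-path-provisioner
      template:
        metadata:
          labels:
            app: local-path-provisioner
        spec:
          nodeSelector:
            storage: local   # hypothetical label, applied only to nodes that have local storage
          containers:
            - name: local-path-provisioner
              image: rancher/local-path-provisioner:v0.0.17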

@plasorak

plasorak commented Jul 6, 2022

Hi, I'm running into the same issue.

I suggest that you use a nodeSelector on the DaemonSet to install the provisioner only on the nodes that actually have local storage.

Could you be a bit more precise? Which DaemonSet are you talking about?

I'm not sure I understand why setting "paths": [] for the default nodes shouldn't work. Am I correct in assuming that if you manually created a PV/PVC pair, the pod that uses it would run on a node that can satisfy the claim? If so, why shouldn't this work? I'd like to avoid having to use a nodeSelector on my pods.
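
To make that scenario concrete, a sketch of a manually created local PersistentVolume (the name and path are hypothetical; the hostname comes from the original report). Because a local PV carries a required nodeAffinity, the scheduler will only place a consuming pod on a node that satisfies it:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: manual-pv              # hypothetical name
    spec:
      capacity:
        storage: 1Gi
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      local:
        path: /opt/local-path-provisioner/manual-pv   # hypothetical path
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values: ["first.example"]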
