Hi,

I have a cluster with two nodes: first.example and second.example. second.example was recently added to make room for deployments that do not need a persistent volume. local-path-provisioner has been configured with an empty paths list for nodes other than the first node.

However, it seems that local-path-provisioner is trying to create persistent volumes on the second node:
error syncing claim "e0b17b36-47e5-416f-a4c7-5eada4db8165": failed to provision volume with StorageClass "local-path": no local path available on node second.example
My guess is that local-path-provisioner is not responsible for that choice, and that this error is just a fail-safe.
If a node is listed but with paths set to [], the provisioner will refuse to provision on that node.
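For reference, here is a sketch of the nodePathMap part of my local-path-config ConfigMap (the storage path on first.example is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap": [
        { "node": "first.example",  "paths": ["/opt/local-path-provisioner"] },
        { "node": "second.example", "paths": [] }
      ]
    }
```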
The node is selected by the scheduler for the volume:
```go
/* vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/controller/volume.go */

// ProvisionOptions contains all information required to provision a volume
type ProvisionOptions struct {
	/* ... */

	// Node selected by the scheduler for the volume.
	SelectedNode *v1.Node
}
```
But the pod that uses the persistent volume claim is not yet scheduled on a node (it is in status Pending); maybe Kubernetes is missing a mechanism to handle this case.
With a nodeSelector on the pod, the persistent volume is correctly created.
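For example, a pod along these lines gets its volume provisioned on first.example (pod, image, and claim names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test                          # placeholder name
spec:
  nodeSelector:
    kubernetes.io/hostname: first.example    # pin the pod to the node that has local paths
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-path-pvc              # placeholder claim using the local-path StorageClass
```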
If I am correct, this issue can be converted into a documentation issue, and I'm willing to add a sentence covering that case.
Kubernetes v1.18.9
rancher/local-path-provisioner:v0.0.17
Yes, local-path-provisioner adopts the WaitForFirstConsumer volume binding mode, which waits for the consumer pod to be scheduled. This is the native behavior of dynamic provisioning.
We will enhance the README a bit to make this case clear for users; contributions are also welcome!
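For reference, the StorageClass shipped with the provisioner declares that binding mode, roughly:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer   # the PVC stays Pending until a consumer pod is scheduled
reclaimPolicy: Delete
```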
I suggest that you use a nodeSelector on the DaemonSet to install the provisioner only on the nodes that actually have local storage.
Could you be a bit more precise? Which DaemonSet are you talking about?
I'm not sure I understand why putting paths: [] for the default nodes shouldn't work. Am I correct in assuming that if you manually created a PV/PVC pair, the pod that uses it would run on a node that is able to satisfy this claim? If so, why shouldn't this work? I'd like to avoid having to use a nodeSelector on my pod.
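For context, this is the kind of hand-made PV/PVC pair I have in mind (names, size, and path are placeholders); as far as I understand, the PV's nodeAffinity is what would let the scheduler place the consuming pod on a node that can satisfy the claim:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-local-pv                   # placeholder name
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual                # placeholder class, not "local-path"
  hostPath:
    path: /opt/local-path-provisioner/manual-pv   # placeholder path on first.example
  nodeAffinity:                           # ties the volume (and therefore its pods) to first.example
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["first.example"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-local-pvc                  # placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  resources:
    requests:
      storage: 2Gi
```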