Describe the bug
A clear and concise description of what the bug is.
If you use a very large CIDR range with a very small node slice size, e.g. an ip_range of /8 with a node_slice_size of /27, there are so many possible slices that the resulting object is too large to fit within a Kubernetes object (the maximum object size is roughly 1.5 MB).
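For illustration, a minimal sketch of the arithmetic (the per-entry byte count is an assumption used only to show the order of magnitude, not a measured value):

```go
package main

import "fmt"

func main() {
	rangePrefix := 8  // ip_range of /8
	slicePrefix := 27 // node_slice_size of /27

	// Number of /27 slices that fit in a /8: 2^(27-8) = 524,288.
	numSlices := 1 << (slicePrefix - rangePrefix)

	// Assume each slice entry serializes to ~50 bytes in the pool object
	// (illustrative assumption, not measured).
	estimatedBytes := numSlices * 50

	fmt.Printf("slices: %d, estimated object size: ~%.1f MB (limit ~1.5 MB)\n",
		numSlices, float64(estimatedBytes)/(1024*1024))
}
```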
Expected behavior
A clear and concise description of what you expected to happen.
We should validate this, or surface an error up front stating that it isn't supported. A /8 split into /27 slices would yield 524,288 slices, which is way beyond the official 5,000-node support. Whereabouts should support more than the base 5,000 nodes, but I don't think we need to support 500,000.
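A minimal sketch of the kind of upfront validation we have in mind (the function name, field handling, and the maxSlices cap are all hypothetical placeholders, not the existing Whereabouts API):

```go
package main

import "fmt"

// validateSliceCount is a hypothetical check that could run when the IPAM
// config is parsed. rangePrefix and slicePrefix are the prefix lengths taken
// from ip_range and node_slice_size (e.g. 8 and 27).
func validateSliceCount(rangePrefix, slicePrefix int) error {
	if slicePrefix < rangePrefix {
		return fmt.Errorf("node_slice_size /%d must be smaller than ip_range /%d", slicePrefix, rangePrefix)
	}
	numSlices := 1 << (slicePrefix - rangePrefix)

	// Placeholder cap; the real supported limit would be decided and documented separately.
	const maxSlices = 8192
	if numSlices > maxSlices {
		return fmt.Errorf("ip_range /%d with node_slice_size /%d yields %d slices, exceeding the supported maximum of %d",
			rangePrefix, slicePrefix, numSlices, maxSlices)
	}
	return nil
}

func main() {
	// Example: the configuration from this issue would be rejected early.
	fmt.Println(validateSliceCount(8, 27))
}
```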
To Reproduce
Steps to reproduce the behavior:
1. Use the node slice pool feature.
2. Create a NAD with an ip_range of /8 and a node_slice_size of /27.
Environment:
Whereabouts version: latest main (not yet released)
Kubernetes version (use kubectl version): any
Network-attachment-definition: N/A
Whereabouts configuration (on the host): N/A
OS (e.g. from /etc/os-release): N/A
Kernel (e.g. uname -a): N/A
Others: N/A
Additional info / context
Add any other information / context about the problem here.
My team can work on a resolution for this. Our initial thinking is to surface this as an error and document the supported limits.