Expected Behaviour
If a user has a 1GB node and sets memory limits of 100Mi for 10 containers, then that should prevent any further containers from being deployed, i.e. the 11th and 12th, which would overcommit the memory and possibly cause an OOM somewhere.
Current Behaviour
You can continue to deploy containers beyond the memory available on the host.
Possible Solution
Read the total memory available on the host, subtract a set amount of headroom for the system, and cap deployments according to the total memory reserved by all deployed containers (see the sketch after the example below).
For example:
1000MB total
System headroom = 500MB
Available = 1000 - 500 = 500MB
nodeinfo1 - 250MB
nodeinfo2 - 250MB
Deploying nodeinfo3 with a limit of 100MB will fail, because available - (nodeinfo1 + nodeinfo2) == 0MB, so no capacity remains for it.
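A minimal sketch of what that admission check could look like in Go. All names here (canDeploy, headroomBytes, deployedLimits) are placeholders for illustration and don't come from the project's codebase; in practice the total would be read from the host and the reserved figure summed from the limits of the deployed containers.

```go
package main

import "fmt"

// headroomBytes is an assumed fixed reserve for the host OS and the
// faas provider itself; 500MB matches the worked example above, not
// any real default.
const headroomBytes int64 = 500 * 1024 * 1024

// canDeploy rejects a deployment when its memory limit, added to the
// limits already reserved by deployed containers, would exceed total
// memory minus the system headroom.
func canDeploy(totalBytes, requestedBytes int64, deployedLimits []int64) error {
	var reserved int64
	for _, l := range deployedLimits {
		reserved += l
	}
	available := totalBytes - headroomBytes - reserved
	if requestedBytes > available {
		return fmt.Errorf("cannot deploy: requested %dMB but only %dMB available after headroom",
			requestedBytes/(1024*1024), available/(1024*1024))
	}
	return nil
}

func main() {
	const mb int64 = 1024 * 1024
	// Matches the example: available = 1000 - 500 - (250 + 250) = 0MB,
	// so nodeinfo3 with a 100MB limit is rejected.
	err := canDeploy(1000*mb, 100*mb, []int64{250 * mb, 250 * mb})
	fmt.Println(err) // cannot deploy: requested 100MB but only 0MB available after headroom
}
```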