Since we keep deploying new images, cluster nodes keep piling up Docker images and layers on their filesystems. These are not deleted automatically, since the kubelet cannot easily know when an image is no longer needed, or when it might be needed again moments later.

In #163 (comment), we set up the configuration needed to clean up the services node automatically: if the configuration works as expected, when disk usage goes above 80%, k8s will delete unused images until disk usage drops below 50%.

The services node is again getting close to the point where it uses more than 80% of its disk, so we should soon be able to confirm that things work as expected. Once confirmed, we need to deploy this configuration on all k8s cluster nodes and add it to the k8s node deployment procedure.
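For reference, the image GC behaviour described above maps to two kubelet settings. A minimal sketch of the relevant `KubeletConfiguration` fragment (the file path and how it is wired into the kubelet depend on how our nodes are provisioned, so treat this as an illustration rather than our exact config):

```yaml
# Kubelet image garbage collection thresholds (sketch).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 80  # start deleting unused images above 80% disk usage
imageGCLowThresholdPercent: 50   # keep deleting until usage falls below 50%
```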
Tasks:

- [ ] confirm that the settings work as expected on the services node
- [ ] deploy the settings on all k8s nodes
- [ ] document the settings for new k8s nodes in the wiki
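To help with the first task, here is a rough sketch of the check the kubelet performs, runnable on the node to see where we stand relative to the thresholds. The `/var/lib/docker` path is an assumption (it depends on the container runtime's data root), and `df --output` assumes GNU coreutils:

```shell
# Compare the image filesystem's disk usage against the kubelet's
# 80% high / 50% low image GC thresholds.
usage=$(df --output=pcent /var/lib/docker 2>/dev/null | tail -1 | tr -d ' %')
usage=${usage:-0}  # default to 0 if the path does not exist on this machine
if [ "$usage" -ge 80 ]; then
  echo "above high threshold: kubelet should start image GC"
elif [ "$usage" -le 50 ]; then
  echo "below low threshold: image GC target reached"
else
  echo "between thresholds: no GC triggered"
fi
```

Watching this value climb past 80% and then fall back under 50% on the services node would confirm the settings behave as expected.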