Do not limit locked memory by default #176

Open
nigoroll wants to merge 1 commit into master
Conversation

nigoroll (Member)

See varnishcache/varnish-cache#4193 and varnishcache/varnish-cache#4121 for context:

It no longer makes sense to apply an external limit on the maximum amount of locked memory: we should trust Varnish-Cache to only attempt mlock(2) where it makes sense, and the amount of VSM grows dynamically with backends, VMODs, etc., so it is far from trivial to calculate a sensible maximum.
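For illustration only: on a systemd-managed installation, the change described here amounts to no longer capping RLIMIT_MEMLOCK in the service unit. Below is a minimal sketch of a drop-in that lifts the limit; the unit name varnish.service, the drop-in path, and the choice of `infinity` (rather than simply omitting the directive) are assumptions, not the literal content of this pull request.

```
# /etc/systemd/system/varnish.service.d/memlock.conf
# Sketch: lift the external cap on locked memory so the cache process
# can mlock(2) its VSM segments as they grow.
[Service]
LimitMEMLOCK=infinity
```

After adding such a drop-in, `systemctl daemon-reload` followed by a service restart applies the new limit.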

dridi (Member) commented Oct 7, 2024

My two cents on this topic.

As far as I can remember, we have always recommended mounting the Varnish state directory /var/lib/varnish (which contains the working directories by default) as a tmpfs partition, so that everything is in memory out of the box. Since Varnish 7, working directories live in /var/run, which should be a tmpfs out of the box, or a symbolic link to /run, which should in turn also be a tmpfs by default.

A tmpfs partition may swap pages in and out unless it is mounted with the noswap option, and I don't think we should prevent swapping of unused artifacts like cold VCLs. On the other hand, we recommend tmpfs precisely because we want our shared memory segments to remain resident.
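As a sketch of the tmpfs recommendation above: an fstab entry along these lines keeps the state directory in memory. The mount point and options are illustrative, and the noswap option only exists on reasonably recent kernels (tmpfs gained it in Linux 6.4).

```
# Illustrative /etc/fstab entry: keep the Varnish state directory on tmpfs.
# Adding "noswap" keeps its pages resident; omitting it allows cold data
# to be swapped out.
tmpfs  /var/lib/varnish  tmpfs  mode=0755,noswap  0  0
```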

So I am in favor of not limiting our mlock(2) usage, on the assumption that no rogue VMOD or VEXT is going to abuse it. I also think we should make an effort to study how the UNIX/Linux jail influences our ability to lock memory in RAM. Maybe dropping root privileges also drops the CAP_IPC_LOCK capability, and the cache process was never able to effectively mlock() anything?
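One way to investigate that question empirically (a sketch, not something this pull request does): inspect the effective capability set and the locked-memory counter of the running cache process. CapEff, VmLck and capsh --decode are standard Linux/libcap facilities; how CHILD_PID is obtained is left open here, it would be the PID of the varnishd child (cache) process.

```
# CHILD_PID is assumed to hold the PID of the varnishd cache (child) process.
grep -E 'CapEff|VmLck' /proc/"$CHILD_PID"/status

# Decode the effective capability mask; if CAP_IPC_LOCK is missing and
# RLIMIT_MEMLOCK is small, mlock(2) calls fail with ENOMEM or EPERM.
capsh --decode="$(awk '/CapEff/ {print $2}' /proc/"$CHILD_PID"/status)"
```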
