Clarify that 'resident set size' != 'size of process' #6
Comments
While I initially agreed with you when I first saw that talk, I'm not sure I do so now. I may update the documentation as you suggest instead, but I'll review your slides and do some testing. When I did all of my original testing, I only found meaningful output (for the cases I was playing with) when I went by resident set size rather than process size, virtual size, etc. I was more interested in when "malloc" was actually called, which I think is something only RSS can show me, right? The problem is, as you say, that memory can go down. To avoid missing cases where it goes down and back up, I could probably let the last recorded memory be set lower in those cases.
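For reference, a minimal sketch of that baseline-lowering idea, assuming a Linux system where VmRSS can be read from /proc/self/status; the subroutine names here are hypothetical and not part of the module:

```perl
use strict;
use warnings;

# Read the current resident set size in kB from /proc/self/status
# (Linux-only; returns undef elsewhere or if the field is missing).
sub current_rss_kb {
    open my $fh, '<', '/proc/self/status' or return undef;
    while (my $line = <$fh>) {
        return $1 if $line =~ /^VmRSS:\s+(\d+)\s+kB/;
    }
    return undef;
}

my $last_rss = current_rss_kb() // 0;

# Report growth relative to the last recorded value, and lower that value
# whenever RSS shrinks so that a later climb back up is still reported.
sub check_growth {
    my $rss = current_rss_kb() // return;
    if ($rss > $last_rss) {
        warn "RSS grew from ${last_rss}kB to ${rss}kB\n";
    }
    $last_rss = $rss if $rss != $last_rss;    # records decreases too
}

check_growth();    # call at whatever points growth should be checked
```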
When malloc is called it will return memory that was previously freed, if possible. There'll be no increase in process memory size in that case. If there's no suitable chunk of free memory for malloc to return then it has to get some more. Traditionally it does that by calling sbrk() and that will increase the process memory size. However the process RSS might not change until the process accesses the memory (read or write) at some point later, possibly seconds or minutes later. At that point the o/s has to provide a physical (resident) page at the place that the process accessed the memory. Typically of course the access is immediate so you don't notice.

Note that, if the system is low on memory, then the physical (resident) page provided by the o/s may have been stolen from some other process that hasn't used it for a while, after the contents are written out to swap space, or back to the filesystem if the page was mmap'd. In that case the RSS of that process goes down. Turn the tables and you can see that, if the situation was reversed, your process could be the one that the o/s is stealing pages from. It's a dance that's happening all the time on systems that are low on memory. On a system with spare memory you're unlikely to see this effect.

So RSS will appear to work fine most of the time, but it's not measuring what you actually care about and may give less than useful results when the system is under load. (You've probably heard of Linux memory overcommit, where you can ask malloc for more memory than is available and it'll happily return you a pointer. It's only when you try to use more and more of those pages that the process grows RSS, and then it, or some other process, might get killed by the Out Of Memory mechanism.)
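One way to watch this from Perl on Linux (a rough sketch, not tied to any particular module; it assumes /proc/self/status is available, and the exact numbers depend on the malloc in use): many small allocations grow the process via the heap, and once freed back to malloc they are typically reused, so a similar second allocation does not grow the process again.

```perl
use strict;
use warnings;

# Print VmSize (total process size) and VmRSS (resident set size) in kB.
sub vm_sizes {
    open my $fh, '<', '/proc/self/status' or die "no /proc/self/status: $!";
    my %v;
    while (<$fh>) { $v{$1} = $2 if /^(VmSize|VmRSS):\s+(\d+)\s+kB/ }
    return "size=$v{VmSize}kB rss=$v{VmRSS}kB";
}

print "start:       ", vm_sizes(), "\n";

# Many small allocations are served from the brk heap rather than mmap.
my @chunks = map { 'x' x 1000 } 1 .. 100_000;    # roughly 100MB of small strings
print "allocated:   ", vm_sizes(), "\n";

@chunks = ();    # freed back to malloc's free lists, usually not to the o/s
print "freed:       ", vm_sizes(), "\n";

my @again = map { 'y' x 1000 } 1 .. 100_000;     # typically reuses that free space
print "reallocated: ", vm_sizes(), "\n";
```

On a typical glibc system the "freed" line shows the size and RSS staying roughly where they were, and the "reallocated" line shows little or no further growth.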
I don't think so. Hopefully it's clear from the above that RSS is simply not a good foundation for what you want. The Linux::Smaps module gives accurate raw data, but is Linux only. To get an accurate size of perl memory use you also need to peer inside the malloc library to find how much 'free' space is sitting in there. Without that you've no visibility of the large volume of allocs and frees that don't alter the process size.

The point at which the process size grows is just the place where malloc ran out of free space. The allocation that perl asked for at that point might be tiny, but malloc, having nothing to give, asks the o/s for a big chunk. Malloc hands a pointer to part of that to perl, and perl will touch that bit (thus allocating physical pages and thus RSS). After that, further requests for memory from malloc will come from the remaining space of the big chunk that malloc got from the o/s. When they're actually touched by perl you'll see the RSS go up, because the o/s had to provide a physical page at that point in time. This is something I hadn't really thought through before. Effectively, tracking RSS lets you "see through" malloc, because malloc itself isn't what causes RSS to grow. So this behaviour, where RSS grows at the point of first use by perl, is what makes RSS look like a good fit.

However, perl may then free some memory, which goes back to the malloc free space but doesn't reduce RSS. So it's no longer being 'used' but is still part of the process size. That free memory is also probably part of the RSS until the o/s needs to steal some pages. That's when it all gets more and more fuzzy.

Probably the best way forward is to try to explain the key points of the above, which I'll try to summarize here:
The RSS of a process can be going down at the same time that the process size is increasing.

I hope this is of some use. [All the above relates to small allocations. Many mallocs will use mmap and munmap for allocations over a certain size. When those are freed the memory is returned to the o/s and the process size reduces. It also ignores malloc free space fragmentation.]
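To illustrate the bracketed note about large allocations (again a hedged sketch; the mmap threshold varies between malloc implementations): a single large buffer is usually served by mmap, so freeing it hands the memory back to the o/s and the process size actually drops, unlike the many-small-allocations case above.

```perl
use strict;
use warnings;

# Report VmSize and VmRSS in kB from /proc/self/status (Linux-only).
sub report {
    my ($label) = @_;
    open my $fh, '<', '/proc/self/status' or die $!;
    my %v;
    while (<$fh>) { $v{$1} = $2 if /^(VmSize|VmRSS):\s+(\d+)\s+kB/ }
    printf "%-12s size=%dkB rss=%dkB\n", $label, $v{VmSize}, $v{VmRSS};
}

report('start');

# One ~100MB string: well above typical mmap thresholds, so most mallocs
# serve it with mmap rather than from the brk heap.
my $big = 'x' x 100_000_000;
report('allocated');

# Freeing it lets malloc munmap the block, so the process size (and the
# RSS, since perl had touched the pages) usually drops back down.
undef $big;
report('freed');
```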
The docs say "Prints out a message when your program grows in memory" but that's not an accurate statement. It would be more accurate to say "Prints out a message when your program occupies more physical memory, which may change at any time due to the whims of the operating system and the demands of other processes".
For the likely use cases of this module I really doubt that you want to be measuring the resident set size. It's only relevant in a few use cases (typically copy-on-write behaviour for forked worker processes). Most users are much more likely to be interested in the total process size.
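For completeness, a sketch of how both numbers, plus the shared/private split that matters in the copy-on-write forked-worker case, can be read by summing /proc/self/smaps directly (an assumption-laden example: it avoids the Linux::Smaps API and relies on a Linux kernel that exposes those fields):

```perl
use strict;
use warnings;

# Sum the per-mapping "<Field>:  <n> kB" lines from /proc/self/smaps (Linux-only).
sub smaps_totals {
    open my $fh, '<', '/proc/self/smaps' or die "no /proc/self/smaps: $!";
    my %sum;
    while (<$fh>) {
        $sum{$1} += $2 if /^(\w+):\s+(\d+)\s+kB/;
    }
    return \%sum;
}

my $m = smaps_totals();
printf "total size : %d kB\n", $m->{Size};    # the whole process size
printf "resident   : %d kB\n", $m->{Rss};     # what RSS measures
printf "shared     : %d kB\n", $m->{Shared_Clean} + $m->{Shared_Dirty};
printf "private    : %d kB\n", $m->{Private_Clean} + $m->{Private_Dirty};
```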
See http://www.slideshare.net/Tim.Bunce/perl-memory-use-lpw2013 for some more info.