[RFC] Split reflists to share their contents across snapshots #1235
Conversation
In some local tests with a slowed-down filesystem, this cut the time to clean up a repository by ~3x, bringing the total 'publish update' time from ~16s down to ~13s. Signed-off-by: Ryan Gonzalez <[email protected]>
When merging reflists with ignoreConflicting set to true and overrideMatching set to false, the individual ref components are never examined, but the refs are still split anyway. Avoiding the split when we never use the components brings a massive speedup: on my system, the included benchmark goes from ~1500 us/it to ~180 us/it. Signed-off-by: Ryan Gonzalez <[email protected]>
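A minimal Go sketch of the fast path this commit describes; the ref format and the `splitRef`/`compareRefs` helpers are illustrative stand-ins, not aptly's actual code:

```go
package main

import (
	"bytes"
	"fmt"
)

// splitRef breaks a ref into its individual fields. This split allocates,
// which is exactly what the fast path below avoids.
func splitRef(ref []byte) [][]byte {
	return bytes.Fields(ref)
}

// compareRefs sketches the merge comparison: when ignoreConflicting is set
// and overrideMatching is not, the components are never consulted, so a
// whole-ref byte comparison is sufficient and the split can be skipped.
func compareRefs(l, r []byte, ignoreConflicting, overrideMatching bool) int {
	if ignoreConflicting && !overrideMatching {
		return bytes.Compare(l, r) // fast path: no per-ref allocations
	}
	lParts, rParts := splitRef(l), splitRef(r)
	// A real component-wise comparison would go here; compare the first
	// field only, for illustration.
	return bytes.Compare(lParts[0], rParts[0])
}

func main() {
	a := []byte("Pamd64 bash 5.2-1 deadbeef")
	b := []byte("Pamd64 coreutils 9.1-1 cafebabe")
	fmt.Println(compareRefs(a, b, true, false)) // -1: taken via the fast path
}
```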
The cleanup phase needs to list out all the files in each component in order to determine what's still in use. When there's a large number of sources (e.g. from having many snapshots), the time spent just loading the package information becomes substantial. However, in many cases, most of the packages being loaded are actually shared across the sources; if you're taking frequent snapshots, for instance, most of the packages in each snapshot will be the same as in other snapshots. In these cases, re-reading the packages repeatedly is just a waste of time.

To improve this, we maintain a list of refs that we know were processed for each component. When listing the refs from a source, only the ones that have not yet been processed will be examined.

Some tests were also added specifically to check listing the files in a component. With this change, listing the files in components on a copy of our production database went from >10 minutes to ~10 seconds, and the newly added benchmark went from ~300ms to ~43ms.

Signed-off-by: Ryan Gonzalez <[email protected]>
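A rough Go sketch of the "already processed" bookkeeping described above; the types and names here are hypothetical, not aptly's real API:

```go
package main

import "fmt"

// refSet records refs already processed for a component.
type refSet map[string]struct{}

// collectNewRefs returns only refs not seen before and marks them as seen,
// so packages shared between snapshots are examined exactly once.
func collectNewRefs(seen refSet, refs []string) []string {
	var fresh []string
	for _, ref := range refs {
		if _, ok := seen[ref]; ok {
			continue // already processed for this component
		}
		seen[ref] = struct{}{}
		fresh = append(fresh, ref)
	}
	return fresh
}

func main() {
	seen := refSet{}
	snapshot1 := []string{"Pamd64 bash 5.2-1", "Pamd64 curl 8.0-1"}
	snapshot2 := []string{"Pamd64 bash 5.2-1", "Pamd64 zsh 5.9-1"}
	fmt.Println(collectNewRefs(seen, snapshot1)) // both refs are new
	fmt.Println(collectNewRefs(seen, snapshot2)) // only zsh is new
}
```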
Reflists are basically stored as arrays of strings, which are quite space-efficient in MessagePack. Thus, using zero-copy decoding results in nice performance and memory savings, because the overhead of separate allocations ends up far exceeding the overhead of the original slice. With the included benchmark run for 20s with -benchmem, the runtime, memory usage, and allocations go from ~740us/op, ~192KiB/op, and 4100 allocs/op to ~240us/op, ~97KiB/op, and 13 allocs/op, respectively. Signed-off-by: Ryan Gonzalez <[email protected]>
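To illustrate why zero-copy decoding helps here, a small Go sketch contrasting per-ref string copies with subslices of the already-decoded buffer; this is a conceptual illustration, not the MessagePack decoder aptly actually uses:

```go
package main

import "fmt"

// copyingDecode materializes each ref as a new string: one allocation and
// one copy per ref.
func copyingDecode(buf []byte, offsets [][2]int) []string {
	refs := make([]string, len(offsets))
	for i, o := range offsets {
		refs[i] = string(buf[o[0]:o[1]]) // string(...) copies the bytes
	}
	return refs
}

// zeroCopyDecode hands back subslices of the original buffer: no per-ref
// allocation, the refs just alias the encoded data.
func zeroCopyDecode(buf []byte, offsets [][2]int) [][]byte {
	refs := make([][]byte, len(offsets))
	for i, o := range offsets {
		refs[i] = buf[o[0]:o[1]]
	}
	return refs
}

func main() {
	buf := []byte("Pamd64 bash 5.2-1Pamd64 curl 8.0-1")
	offsets := [][2]int{{0, 17}, {17, 34}}
	fmt.Println(copyingDecode(buf, offsets))
	fmt.Printf("%s\n", zeroCopyDecode(buf, offsets))
}
```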
The output doesn't actually depend on the reflists, and loading them for every published repo starts to take substantial time and memory. Signed-off-by: Ryan Gonzalez <[email protected]>
The previous reflist logic would early-exit the loop body if one of the lists was empty, but that skips the compacting logic entirely. Instead of doing the early-exit, we can leave a list's ref as nil when the list end is reached and then flip the comparison result, which will essentially treat it as being greater than all others. This should preserve the general behavior without omitting the compaction. Signed-off-by: Ryan Gonzalez <[email protected]>
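A small Go sketch of the comparison trick described above, treating a nil ref from an exhausted list as greater than any real ref so the merge loop (and its compaction step) keeps running; illustrative only, not aptly's implementation:

```go
package main

import (
	"bytes"
	"fmt"
)

// compareWithNil compares two refs, where nil means "this list is
// exhausted" and sorts after every real ref.
func compareWithNil(a, b []byte) int {
	switch {
	case a == nil && b == nil:
		return 0
	case a == nil:
		return 1 // exhausted list sorts after everything
	case b == nil:
		return -1
	default:
		return bytes.Compare(a, b)
	}
}

func main() {
	fmt.Println(compareWithNil(nil, []byte("Pamd64 zsh 5.9-1")))  // 1
	fmt.Println(compareWithNil([]byte("Pamd64 bash 5.2-1"), nil)) // -1
	fmt.Println(compareWithNil(nil, nil))                         // 0
}
```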
Thanks for the analysis and solution! I will give it a try on my side...
Could you fix the golangci-lint errors? You can see them by running: Then the pipeline passes further :)
Force-pushed from 0d79867 to 02a7454
@neolynx whoops, didn't notice those, fixed now!
Force-pushed from 02a7454 to 15e2200
Now we have some tests failing: https://github.com/aptly-dev/aptly/actions/runs/7558329700/job/20579601126?pr=1235#step:10:5192 Not sure this is related... could you maybe check what this is?
Ah, I believe those are because the source apt repos themselves changed, so now the mirrored packages also changed. I think I've seen that test fixed before via manual updates... not sure if you'd like something like that in this PR or elsewhere.
In current aptly, each repository and snapshot has its own reflist in the database. This brings a few problems with it:

- Given sufficiently large repositories and snapshots, these lists can get enormous, reaching >1MB. This is a problem for LevelDB's overall performance, as it tends to prefer values around the configured block size (defaults to just 4KiB).
- When you take these large repositories and snapshot them, you have a full, new copy of the reflist, even if only a few packages changed. This means that having a lot of snapshots with a few changes causes the database to basically be full of largely duplicate reflists.
- All the duplication also means that many of the same refs are being loaded repeatedly, which can cause some slowdown but, more notably, eats up huge amounts of memory.
- Adding on more and more new repositories and snapshots will cause the time and memory spent on things like cleanup and publishing to grow roughly linearly.

At the core, there are two problems here:

- Reflists get very big because there are just a lot of packages.
- Different reflists can tend to duplicate much of the same contents.

*Split reflists* aim at solving this by separating reflists into 64 *buckets*. Package refs are sorted into individual buckets according to the following system:

- Take the first 3 letters of the package name, after dropping a `lib` prefix. (Using only the first 3 letters will cause packages with similar prefixes to end up in the same bucket, under the assumption that packages with similar names tend to be updated together.)
- Take the 64-bit xxhash of these letters. (xxhash was chosen because it has relatively good distribution across the individual bits, which is important for the next step.)
- Use the first 6 bits of the hash (range [0:63]) as an index into the buckets.

Once refs are placed in buckets, a sha256 digest of all the refs in the bucket is taken. These buckets are then stored in the database, split into roughly block-sized segments, and all the repositories and snapshots simply store an array of bucket digests.

This approach means that *repositories and snapshots can share their reflist buckets*. If a snapshot is taken of a repository, it will have the same contents, so its split reflist will point to the same buckets as the base repository, and only one copy of each bucket is stored in the database. When some packages in the repository change, only the buckets containing those packages will be modified; all the other buckets will remain unchanged, and thus their contents will still be shared. Later on, when these reflists are loaded, each bucket is only loaded once, short-cutting loading many megabytes of data. In effect, split reflists are essentially copy-on-write, with only the changed buckets stored individually.

Changing the disk format means that a migration needs to take place, so that task is moved into the database cleanup step, which will migrate reflists over to split reflists, as well as delete any unused reflist buckets.

All the reflist tests are also changed to additionally test out split reflists; although the internal logic is all shared (since buckets are, themselves, just normal reflists), some special additions are needed to have native versions of the various reflist helper methods.

In our tests, we've observed the following improvements:

- Memory usage during publish and database cleanup, with `GOMEMLIMIT=2GiB`, goes down from ~3.2GiB (larger than the memory limit!) to ~0.7GiB, a decrease of ~4.5x.
- Database size decreases from 1.3GB to 367MB.

*In my local tests*, publish times had also decreased down to mere seconds, but the same effect wasn't observed on the server, with the times staying around the same. My suspicion is that this is due to I/O performance: my local system is an M1 MBP, which almost certainly has much faster disk speeds than our DigitalOcean block volumes. Split reflists have the side effect of requiring more random accesses from reading all the buckets by their keys, so if your random I/O performance is slower, it might cancel out the benefits. That being said, even in that case, the memory usage and database size advantages still persist.

Signed-off-by: Ryan Gonzalez <[email protected]>
Force-pushed from 15e2200 to d61a8d8
replaced by #1282
This builds on top of #1222, #1227, and #1233, and is thus in draft state until those are merged in. The only commit that's actually new is the very last one, whose commit message I copied below. (Even that single commit is admittedly quite big, but a sizable chunk of the changes are just plumbing a new `RefListCollection` around the code.)

Description of the Change
In current aptly, each repository and snapshot has its own reflist in the database. This brings a few problems with it:

- Given sufficiently large repositories and snapshots, these lists can get enormous, reaching >1MB. This is a problem for LevelDB's overall performance, as it tends to prefer values around the configured block size (defaults to just 4KiB).
- When you take these large repositories and snapshot them, you have a full, new copy of the reflist, even if only a few packages changed. This means that having a lot of snapshots with a few changes causes the database to basically be full of largely duplicate reflists.
- All the duplication also means that many of the same refs are being loaded repeatedly, which can cause some slowdown but, more notably, eats up huge amounts of memory.
- Adding on more and more new repositories and snapshots will cause the time and memory spent on things like cleanup and publishing to grow roughly linearly.
At the core, there are two problems here:

- Reflists get very big because there are just a lot of packages.
- Different reflists can tend to duplicate much of the same contents.

Split reflists aim at solving this by separating reflists into 64 buckets. Package refs are sorted into individual buckets according to the following system (see the sketch after this list):

- Take the first 3 letters of the package name, after dropping a `lib` prefix. (Using only the first 3 letters will cause packages with similar prefixes to end up in the same bucket, under the assumption that packages with similar names tend to be updated together.)
- Take the 64-bit xxhash of these letters. (xxhash was chosen because it has relatively good distribution across the individual bits, which is important for the next step.)
- Use the first 6 bits of the hash (range [0:63]) as an index into the buckets.
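A hedged Go sketch of that bucketing rule. It assumes the `github.com/cespare/xxhash/v2` package (aptly may use a different xxhash implementation) and assumes the "first 6 bits" are the low bits of the hash:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/cespare/xxhash/v2"
)

// bucketIndex sketches the rule: strip a "lib" prefix, take the first
// three characters, hash them with 64-bit xxhash, and keep 6 bits of the
// hash as the bucket number (0..63). Which 6 bits count as the "first" is
// an assumption here; the low bits are used for illustration.
func bucketIndex(packageName string) int {
	name := strings.TrimPrefix(packageName, "lib")
	if len(name) > 3 {
		name = name[:3]
	}
	return int(xxhash.Sum64String(name) & 0x3F) // 6 bits -> 64 buckets
}

func main() {
	// Packages with a shared prefix land in the same bucket, which is the
	// point: related packages tend to be updated together.
	fmt.Println(bucketIndex("libssl3"), bucketIndex("libssl-dev"))
	fmt.Println(bucketIndex("python3"), bucketIndex("python3-pip"))
}
```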
Once refs are placed in buckets, a sha256 digest of all the refs in the
bucket is taken. These buckets are then stored in the database, split
into roughly block-sized segments, and all the repositories and
snapshots simply store an array of bucket digests.
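A short sketch of computing a bucket digest; the exact serialization aptly hashes is not shown in this PR text, so the separator byte and layout below are assumptions:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// bucketDigest identifies a bucket by a sha256 digest of all refs it
// contains; two snapshots whose buckets hold the same refs get the same
// digest and can share one stored copy.
func bucketDigest(refs [][]byte) [32]byte {
	h := sha256.New()
	for _, ref := range refs {
		h.Write(ref)
		h.Write([]byte{0}) // separator so concatenations can't collide
	}
	var digest [32]byte
	copy(digest[:], h.Sum(nil))
	return digest
}

func main() {
	bucket := [][]byte{
		[]byte("Pamd64 bash 5.2-1"),
		[]byte("Pamd64 coreutils 9.1-1"),
	}
	// A repository or snapshot then stores just this digest per bucket.
	fmt.Printf("%x\n", bucketDigest(bucket))
}
```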
This approach means that repositories and snapshots can share their
reflist buckets. If a snapshot is taken of a repository, it will have
the same contents, so its split reflist will point to the same buckets
as the base repository, and only one copy of each bucket is stored in
the database. When some packages in the repository change, only the
buckets containing those packages will be modified; all the other
buckets will remain unchanged, and thus their contents will still be
shared. Later on, when these reflists are loaded, each bucket is only
loaded once, short-cutting loading many megabytes of data. In effect,
split reflists are essentially copy-on-write, with only the changed
buckets stored individually.
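One way to picture that sharing at load time: a sketch where buckets are fetched by digest through a cache, so a bucket referenced by many snapshots is read only once. The store type here stands in for LevelDB and is not aptly's actual API:

```go
package main

import "fmt"

// bucketStore loads buckets by digest, caching each one so repeated
// references from different snapshots hit the cache instead of the db.
type bucketStore struct {
	db    map[string][]string // digest -> refs, stands in for LevelDB
	cache map[string][]string
}

func (s *bucketStore) load(digest string) []string {
	if refs, ok := s.cache[digest]; ok {
		return refs // already loaded for another snapshot
	}
	refs := s.db[digest]
	s.cache[digest] = refs
	return refs
}

func main() {
	store := &bucketStore{
		db:    map[string][]string{"d1": {"Pamd64 bash 5.2-1"}},
		cache: map[string][]string{},
	}
	// Two snapshots pointing at the same bucket digest share one load.
	for _, snapshotBuckets := range [][]string{{"d1"}, {"d1"}} {
		for _, d := range snapshotBuckets {
			fmt.Println(store.load(d))
		}
	}
}
```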
Changing the disk format means that a migration needs to take place, so
that task is moved into the database cleanup step, which will migrate
reflists over to split reflists, as well as delete any unused reflist
buckets.
All the reflist tests are also changed to additionally test out split
reflists; although the internal logic is all shared (since buckets are,
themselves, just normal reflists), some special additions are needed to
have native versions of the various reflist helper methods.
In our tests, we've observed the following improvements:
- Memory usage during publish and database cleanup, with `GOMEMLIMIT=2GiB`, goes down from ~3.2GiB (larger than the memory limit!) to ~0.7GiB, a decrease of ~4.5x.
- Database size decreases from 1.3GB to 367MB.
In my local tests, publish times had also decreased down to mere seconds, but the same effect wasn't observed on the server, with the times staying around the same. My suspicion is that this is due to I/O performance: my local system is an M1 MBP, which almost certainly has much faster disk speeds than our DigitalOcean block volumes. Split reflists have the side effect of requiring more random accesses from reading all the buckets by their keys, so if your random I/O performance is slower, it might cancel out the benefits. That being said, even in that case, the memory usage and database size advantages still persist.
Checklist
AUTHORS
It would be awesome if anyone could also test this out and report how it affects their performance & memory usage.