perf: give a major rework on how our deleting works #54
This PR improves accumulator deletion significantly in a few ways.
Our deletion code had some major bottlenecks: it calculated hashes
twice (once for the old roots and once for the updated ones), and it
filtered out the nodes that are in the proof, which caused a linear
search over proof_positions for each element in nodes.
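A minimal sketch of that filtering bottleneck (the `(position, hash)` node representation and the function names are illustrative, not the actual rustreexo types): a slice `contains` per node costs O(n·m), while collecting the positions into a `HashSet` first drops it to roughly O(n + m):

```rust
use std::collections::HashSet;

// Illustrative node shape: (position, hash). Not the real rustreexo types.
fn filter_linear(nodes: &[(u64, u32)], proof_positions: &[u64]) -> Vec<(u64, u32)> {
    // Old-style approach: a linear scan of proof_positions per node -> O(n * m).
    nodes
        .iter()
        .filter(|(pos, _)| !proof_positions.contains(pos))
        .copied()
        .collect()
}

fn filter_with_set(nodes: &[(u64, u32)], proof_positions: &[u64]) -> Vec<(u64, u32)> {
    // One alternative: build a set of positions once, then O(1) lookups -> O(n + m).
    let in_proof: HashSet<u64> = proof_positions.iter().copied().collect();
    nodes
        .iter()
        .filter(|(pos, _)| !in_proof.contains(pos))
        .copied()
        .collect()
}

fn main() {
    let nodes = vec![(0u64, 10u32), (1, 11), (4, 12)];
    let proof_positions = vec![1u64, 3];
    // Both approaches keep the nodes whose position is not in the proof.
    assert_eq!(
        filter_linear(&nodes, &proof_positions),
        filter_with_set(&nodes, &proof_positions)
    );
    println!("{:?}", filter_with_set(&nodes, &proof_positions)); // [(0, 10), (4, 12)]
}
```

Note that this PR sidesteps the filtering entirely rather than just speeding it up, as described below.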
This commit fixes that by:

- Computing both updated and current roots in a single pass, so we don't
  need to loop twice. In the future, we could even use vectorized
  computation of sha512_256 to further speed things up.
- Returning the proof nodes from calculate_nodes_delete. This wastes a
  little memory, as nodes in the proofs are returned too, but it won't
  cause a new allocation since we return the nodes as-is.
- Reusing those returned nodes, so we don't repeat the work when
  checking or updating proofs.
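The single-pass idea can be sketched as follows. This is a simplified illustration, not the actual calculate_nodes_delete code: `parent_hash` is a stand-in for the real sha512_256 parent hash, and `climb` is a hypothetical helper that walks one proof branch, hashing the old and new values at each level in the same loop instead of traversing twice:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the sha512_256 parent hash used by the accumulator.
fn parent_hash(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

// Walk up one branch of the tree once, computing both the current root
// (to verify the proof) and the updated root (after the deletion) at
// each level, so the tree is traversed a single time.
fn climb(old_leaf: u64, new_leaf: u64, siblings: &[u64]) -> (u64, u64) {
    let mut old = old_leaf;
    let mut new = new_leaf;
    for &sib in siblings {
        old = parent_hash(old, sib); // contributes to the current root
        new = parent_hash(new, sib); // contributes to the updated root
    }
    (old, new)
}

fn main() {
    let (current_root, updated_root) = climb(1, 2, &[10, 20]);
    println!("current: {current_root}, updated: {updated_root}");
}
```

In the real accumulator the left/right ordering at each level depends on the node's position, but the point here is only that both root sets fall out of one traversal.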
After running the new code for two days with Floresta, the speedup is
clear: calculate_nodes_delete now takes less than 2% of the CPU time
for block validation. Before this patch, it was using more than 40%.
Here's a before and after