Add benchmarks from dashmap and hashbrown #43
Conversation
For licenses, I think the right thing to do is to include the license file from the repositories you copied from. It would also be good to include a short comment at the top of each copied file saying where it was copied from. Separately, I don't know that we need to pull in the benchmarks for other maps into here. I would also like us to have separate benchmark files for the flurry version of the different benchmarks.
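For example, a copied benchmark file could start with a provenance comment along these lines (the file and license names here are illustrative, not the actual layout of this PR):

```rust
// The benchmarks in this file were copied from the `dashmap` repository
// (https://github.com/xacrimon/dashmap) and adapted for `flurry`.
// The original MIT license and copyright notice are included in
// LICENSE-DASHMAP at the repository root.
```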
Wouldn't we want to compare our results against the other hashmaps? Of course one can run their benchmarks separately; I just thought it would be nice to have them available without needing separate projects. For comparing performance across releases, one could still execute only the flurry benchmarks. Regarding the license, I will update the branch accordingly. I can also separate the benchmark files as you suggested.
We do want to compare against other hashmaps, and while it is convenient to copy the benchmarks from there to here, I don't think it's worth it. Better to avoid keeping multiple copies, and instead just refer people to where the master copy of the benchmarks for the other backends lives.
Have you tried actually running the benchmarks? I'm curious what kind of numbers you get and how they compare.
They seem... slow? Inserting in particular is way slower than in the other maps.
It's worth noting that I ran this on my personal machine while doing other tasks, so this is not the cleanest setup. I don't think that would be responsible for differences of this magnitude, though. Is there a Rust profiling tool that could tell which part of the implementation the time is being spent in?
Oh, that is very interesting indeed! Let's land this and then do some profiling. For profiling, my recommendation is to add:

```toml
[profile.release]
debug = true

[profile.bench]
debug = true
```

to the crate's `Cargo.toml`, so the release and bench builds keep debug symbols.
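With debug symbols enabled, one possible workflow on Linux is to profile a single bench binary directly (this assumes `perf` is installed and criterion-style benchmarks; the binary name below is illustrative, not from this PR):

```shell
cargo bench --no-run                 # build the bench binaries with debug info
perf record --call-graph dwarf \
    target/release/deps/flurry_bench-* --bench   # profile one bench binary
perf report                          # inspect where the time is spent
```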
I might have to defer this to someone else. I'm on a Windows machine at the moment and will be a little busy for the next few weeks anyway. Perhaps it's better to open an issue for this, so someone can really take a good look at it?
Yes, that makes sense! Want to open an issue and include your preliminary results?
This addresses #8. The added files have all the original benchmarks from `dashmap` and `hashbrown` to compare against (this includes several other hashmap implementations), as well as implementations of the `dashmap`- and `hashbrown`-specific tests for `flurry`. I also tried to add an implementation of the `dashmap` tests for `flurry` that doesn't `pin()` the epoch for every operation (the original implementation uses `rayon`'s `par_iter().for_each()`, which requires the executed closure to be `Send + Sync`); however, this requires setting up the threads differently and I'm not sure how much overhead this introduces, so keep this in mind when using the comparison.

The main reason this is a draft is that I wanted to make sure about the licenses. The MIT license, as far as I understand, requires license and copyright notices to be present in derived works. Am I correct that this is required? If so, where exactly do we add them?
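The thread-setup difference described above can be sketched with std-only code. This is a hypothetical illustration, not code from the PR: a `Mutex`-wrapped `std::collections::HashMap` stands in for `flurry`'s concurrent map so the sketch runs without external crates; in the real benchmark, each manually spawned thread would acquire a `flurry` guard once, instead of `pin()`-ing the epoch on every operation as the `rayon` `par_iter().for_each()` version effectively does.

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::thread;

/// Insert `n` keys using `n_threads` manually spawned threads, each
/// handling one contiguous chunk of the key space. Returns the final
/// map size. The Mutex-wrapped std HashMap is only a runnable stand-in
/// for a concurrent map such as flurry's.
fn parallel_insert(n: u64, n_threads: usize) -> usize {
    let map = Mutex::new(HashMap::new());
    let keys: Vec<u64> = (0..n).collect();
    let map_ref = &map;
    // Ceiling division so every key lands in some chunk.
    let chunk_size = ((keys.len() + n_threads - 1) / n_threads).max(1);

    thread::scope(|s| {
        for chunk in keys.chunks(chunk_size) {
            s.spawn(move || {
                // With flurry, the epoch would be pinned ONCE here,
                // per thread, rather than once per insert -- that
                // per-operation pinning is the overhead being avoided.
                for &k in chunk {
                    map_ref.lock().unwrap().insert(k, k + 1);
                }
            });
        }
    });

    map.into_inner().unwrap().len()
}

fn main() {
    println!("inserted {} entries", parallel_insert(10_000, 4));
}
```

The chunked setup keeps the per-thread work `Send` without requiring the whole closure to be shared per element, which is what the `rayon` version forces.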