Comparison benchmarks and comparison in Readme #113
base: main
Conversation
🚀 Deployed on https://648459faad38cc4b921d3e23--ringbuffer.netlify.app
let mut vd = Vec::with_capacity(CAP);

b.iter(|| {
    for i in 0..ITER {
Shouldn't we push until CAP and then start doing remove & push, so that the number of elements stays at CAP?
Specifically: remove from the front, push on the back.
Hmm, I guess that makes sure the first iteration is not an outlier.
Co-authored-by: Victor <[email protected]>
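For concreteness, the steady-state pattern suggested above might look something like the following. This is only a sketch: the Criterion harness and the CAP/ITER values are assumptions, not taken from this PR.

```rust
use std::collections::VecDeque;

use criterion::{black_box, Bencher};

const CAP: usize = 1 << 10; // assumed capacity, not from this PR
const ITER: usize = 1 << 14; // assumed iteration count

fn bench_vecdeque_steady_state(b: &mut Bencher) {
    let mut vd = VecDeque::with_capacity(CAP);
    // Fill to CAP up front so the measured loop only exercises the
    // steady state: remove from the front, push on the back.
    for i in 0..CAP {
        vd.push_back(i);
    }
    b.iter(|| {
        for i in 0..ITER {
            vd.pop_front();
            vd.push_back(i);
            black_box(vd.len());
        }
    });
}
```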
    let _ = rb.push(i);
    black_box(());
}
for i in 0..ITER {
Doing this more than CAP times is just a no-op, right?
Sort of. The buffer has to reject the items on all the remaining iterations, which might take time, just as our dropping of old items does.
Sorry, that's for the heapless version. Here it will drop old items ITER - CAP times.
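To illustrate the difference being discussed, here is a sketch of the two behaviors. The heapless API details are from memory and should be treated as assumptions:

```rust
use heapless::Deque; // exact signatures are an assumption

const CAP: usize = 8;

fn main() {
    let mut dq: Deque<u32, CAP> = Deque::new();

    // heapless behavior: once the deque is full, push_back returns Err
    // and the item is rejected. Rejecting still does work each iteration.
    for i in 0..32 {
        let _ = dq.push_back(i);
    }
    assert_eq!(dq.len(), CAP); // items beyond CAP were rejected, not overwritten

    // Ringbuffer-style behavior instead: drop the oldest item before
    // pushing whenever the buffer is full.
    for i in 32..64 {
        if dq.is_full() {
            dq.pop_front();
        }
        let _ = dq.push_back(i);
    }
    assert_eq!(dq.len(), CAP);
}
```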
I'm just kind of confused why we are testing different things for different structures. Shouldn't we test all of them under the same use case: a buffer which can contain at most CAP items, removing old entries as new ones get pushed, and clearing out the buffer at the end?
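For concreteness, that use case could be emulated on a VecDeque roughly like this (a sketch; CAP is an assumed constant):

```rust
use std::collections::VecDeque;

const CAP: usize = 1 << 10; // assumed capacity

fn main() {
    let mut vd: VecDeque<usize> = VecDeque::with_capacity(CAP);
    for i in 0..4 * CAP {
        // Drop the oldest entry so the buffer never exceeds CAP items.
        if vd.len() == CAP {
            vd.pop_front();
        }
        vd.push_back(i);
    }
    // Clear out the buffer at the end.
    vd.clear();
}
```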
No other data structure has that behavior. They're different data structures; we cannot test them under the same circumstances.
Then the comparison is meaningless, imo. It should be about using similar data structures for the same purpose. I think we can emulate ringbuffer behaviour on most, if not all, of these data structures.
I disagree. No benchmark can compare these data structures completely fairly, but it does give a good overview of the differences in order of magnitude. Of course a channel is not a ringbuffer, but pushing 2^14 items to a channel takes about 14 times longer than pushing them to a ringbuffer which deals with removing old items. Simply pushing 2^14 items to a VecDeque that has to deal with growing is about 3x slower than pushing to a ringbuffer. Pushing to a heapless deque which drops items on insert is a few percent slower than pushing to a ringbuffer which deletes old items.
None of them are the same data structure, but it does show the order of magnitude of the differences you can expect when using these data structures. Each one is good at different things, but this is their normal performance when you use them as a queue.
In fact, I don't really want to modify these data structures to function identically to ringbuffer. Then we wouldn't measure how they normally perform as a queue for the purpose they were designed for; instead we would measure how well we can turn each data structure into a ringbuffer.
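As an example of the "use each structure as designed" approach described above, the channel benchmark might simply push with no eviction at all. A sketch, with an assumed Criterion harness and ITER value:

```rust
use std::sync::mpsc;

use criterion::{black_box, Bencher};

const ITER: usize = 1 << 14; // the 2^14 items mentioned above

fn bench_channel(b: &mut Bencher) {
    b.iter(|| {
        // Use the channel the way it is designed: it simply grows;
        // nothing forces ringbuffer semantics onto it.
        let (tx, rx) = mpsc::channel();
        for i in 0..ITER {
            tx.send(i).unwrap();
        }
        black_box(rx);
    });
}
```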
As long as we add a disclaimer/description of the benchmarks in the readme, I think this is fine then.
Based on #112