Setup function not run per sample, but per batch #3
Hello 😊 The problem with that is that the number of cycles is dynamic, depending on the milliseconds that you want to spend per sample (100 milliseconds and 50 samples by default, so a minimum of 5 seconds will be spent per test). There could be millions of cycles on fast tests. It seems that you are trying to create the cycles yourself. There's no need. Just do the single operation that you want to test and the tool will adjust the cycles to fit the minimum time needed to gather a sample. For example, if you have a regex to test with 10 different inputs, run that regex with the 10 different inputs and the tool will increase the cycles so it runs repeatedly for 100 ms, and then it will calculate the number of operations per second based on those cycles and the time spent. It will also restart the process from time to time to "reset" the memory, do warm-up runs and so on to gather clean samples. Can you please share your test? That way I can see real use cases, add more features if needed, or maybe help you. PS: Yes, I have to work on the documentation to explain more clearly how it works under the hood.
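For illustration, a minimal sketch of the workflow described above, using the same `IsoBench` calls that appear later in this thread (the regex and inputs are made up, and it assumes the setup argument to `add` is optional):

```ts
import { IsoBench } from "iso-bench";

// Hypothetical regex and inputs, only to illustrate the workflow described above.
const regex = /foo\d+bar/;
const inputs = Array.from({ length: 10 }, (_, i) => `foo${i}bar`);

await new IsoBench("regex example")
    .add("10 inputs", () => {
        // Run the operation once per input; the tool repeats this callback for as
        // many cycles as needed to fill the per-sample time (100 ms by default).
        for (const input of inputs) {
            regex.test(input);
        }
    })
    .consoleLog()
    .run();
```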
I was trying to benchmark the performance of IterableIterators. I can't create them beforehand, and if I create them during the test then I'm measuring both the performance of the iterator and the performance of its creation. The test I was running this time is basically:

```ts
import { IsoBench } from "iso-bench";
function* createIterable(count) {
for (let i = 0; i < count; i++) {
yield i;
}
}
let i = 0;
await new IsoBench("generated iterable")
.add("while", (iterable) => {
let next = iterable.next();
while (!next.done) {
next = iterable.next();
i = i ^ next.value!;
}
}, () => createIterable(100000))
.add("do while", (iterable) => {
let next;
do {
next = iterable.next();
i = i ^ next.value!;
}
while(!next.done);
}, () => createIterable(100000))
.add("for ... of", (iterable) => {
for(const value of iterable) {
i = i ^ value;
}
}, () => createIterable(100000))
  .consoleLog().run();
```

I was surprised because I was getting really strange results. But that's because the iterable had been exhausted by the first sample of the batch and was doing nothing for all of the other samples. I'm not really sure how else to benchmark this without the ability to create a pristine iterator per sample. This works perfectly, though, if you use my modified version.
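A minimal standalone sketch (no iso-bench involved) of why the later samples did nothing, assuming the same `createIterable` generator as above:

```ts
function* createIterable(count: number) {
    for (let i = 0; i < count; i++) {
        yield i;
    }
}

const iterable = createIterable(3);
console.log([...iterable]); // [0, 1, 2]: the first pass consumes the generator
console.log([...iterable]); // []: every later pass sees done: true immediately
```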
@webstrand but the test that you shared is working perfectly in the latest iso-bench version, with no modifications. Can you share your output? Maybe there's a bug depending on the platform or Node version or something. Kinda weird. PS: The result of the test is quite interesting too, tbh. Nice catch.
Actually the output of that test is entirely bogus. After the first execution of the test, the iterator object returned by `createIterable(100000)` is already exhausted, so every subsequent execution does nothing. For instance, compare the performance of:

```ts
.add("for ... of", (iterable) => {
for(const value of iterable) {
i = i ^ value;
}
}, () => Array.from({ length: 100000 }, (_,i) => i))
```

which appears to be much, much slower, but is actually re-creating the iterator with each execution of the test. This means the loop body is actually executing. Effectively, my original benchmark is measuring "how much time does it take for JavaScript to realize the iterator is already exhausted".
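The difference is that an array is an Iterable whose `[Symbol.iterator]()` hands out a fresh iterator every time, while a generator object is its own iterator and can only be consumed once. A small sketch of the array side:

```ts
// An array is an Iterable, not an Iterator: each for...of (or spread) asks it
// for a brand-new iterator, so the loop body really runs on every execution.
const arr = [0, 1, 2];
console.log([...arr]); // [0, 1, 2]
console.log([...arr]); // [0, 1, 2] (unlike the generator object above, which gets exhausted)
```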
My results prior to patching
My results after patching
My patch

The following benchmark behaves the same whether or not the patch is applied:

```ts
import { IsoBench } from "iso-bench";
function* createIterable(count) {
for (let i = 0; i < count; i++) {
yield i;
}
}
let i = 0;
const iter = createIterable(100000);
await new IsoBench("generated iterable")
.add("while", (iterable) => {
let next = iterable.next();
while (!next.done) {
next = iterable.next();
i = i ^ next.value!;
}
}, () => iter)
.add("do while", (iterable) => {
let next;
do {
next = iterable.next();
i = i ^ next.value!;
}
while(!next.done);
}, () => iter)
.add("for ... of", (iterable) => {
for(const value of iterable) {
i = i ^ value;
}
}, () => iter)
  .consoleLog().run();
```

because the setup function always returns the same iterator, which is exhausted after the first execution either way.
@webstrand OOOOH I GET IT!!! Sorry for the brainfart :-( Yes, you are right. I will work on something for that. The problem with moving the performance.now() call is that each time you call it, it adds a bit of false time to the test, as discussed here: #2 (comment), so it's better to call it the least amount of times and then divide by the number of cycles. I will think of something. Maybe use an array as you said the first time, but instead of using the cycles, which are dynamic, use a static-length array, so you can prepare a proper setup. I'll come up with something that doesn't add extra false time to the test but also doesn't reuse the setup, to avoid exhausting/optimizations. Thank you for the issue, it's a really good catch :-)
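A rough sketch of that idea, not the actual iso-bench internals, with a made-up `runBatch` helper: the setup runs a fixed number of times outside the timed region, and performance.now() is still only called twice per batch:

```ts
function runBatch<T>(test: (arg: T) => void, setup: () => T, cycles: number, setupCount = 1000) {
    // Prepare a static-length array of setup results; this part is not measured.
    const prepared: T[] = [];
    for (let i = 0; i < setupCount; i++) {
        prepared.push(setup());
    }
    // Time the whole batch with a single pair of performance.now() calls.
    const start = performance.now();
    for (let i = 0; i < cycles; i++) {
        test(prepared[i % setupCount]); // setup results get reused if cycles > setupCount
    }
    const end = performance.now();
    return (end - start) / cycles; // average time per cycle
}
```

The trade-off is visible in the modulo: if a batch runs more cycles than prepared objects, some setup results are still reused.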
I was confused by the documentation and assumed the setup function would run prior to each run of the test function. But instead it runs per batch, which could be of variable size, so I can't prepare `n` objects for the test function to operate on. Maybe something like this would be effective?
Since you don't want to include the setup function in the time measurement, and to avoid introducing extra error, you need to avoid calling performance.now more often than at the beginning and end of every batch.
Or I suppose you could pass the count into the setup function and the cycle into the test function, and let the user take care of building and selecting from the array?
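For example, a hypothetical shape for that alternative (this is not the current iso-bench API; the extra parameters are made up to illustrate the suggestion, reusing the createIterable generator from above):

```ts
let iterables: Generator<number>[] = [];

await new IsoBench("generated iterable")
    .add("while", (cycle: number) => {
        const iterable = iterables[cycle]; // pick the pristine iterator prepared for this cycle
        // ...consume iterable as in the tests above...
    }, (count: number) => {
        // Build one iterator per cycle, before the batch starts and before timing begins.
        iterables = Array.from({ length: count }, () => createIterable(100000));
    })
    .consoleLog().run();
```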
If nothing else, making the documentation clearer would be nice, perhaps?
I really appreciate your library, thank you for releasing it.