Test results more consistently #74
Dead code elimination is taking place. The idea of this repository is to compare two versions of Node.js performing pretty much the same operation. If version X contains a dead-code-eliminated (noop) path and X+1 doesn't, it should report that. The percentage is irrelevant in microbenchmarking. If we include operations to prevent dead code elimination (for instance, …). The idea is pretty simple: these operations have changed in Node.js 22 for that specific and unrealistic workload.
I see, yeah, I'm not disagreeing; it actually fulfilled its purpose by showing that change. But I wasn't alluding to … Small modifications can get us there:
I agree! I also suggest moving to another benchmark tool.
IMO the problem is that when/if dead-code elimination is at play, the current benchmark architecture does not measure the operation. If my application does millions of …
Is there a way to see whether the code is hitting dead-code elimination? I think it's fair to assume all of the code gets optimized by V8 (Maglev/TurboFan). We should only make sure that we are not comparing a …

cc: @joyeecheung
With TurboFan you could figure it out by doing …

To avoid dead code elimination, IMO the best way is just to make your benchmarks more sophisticated. If a benchmark is too simple and unrealistic, you are basically trying to outsmart the optimizing compilers so that they fail to notice that the code isn't doing anything, even though a human can tell it's not doing anything. But optimizing compilers are designed to be smart enough for this (at least TurboFan is). Maglev is dumber because it cuts corners in optimization, under the assumption that most code doesn't need aggressive optimizations, which can be expensive on their own, while the compilation speed of that code may already play a more important role in overall performance.

In the real world, a lot of code doesn't actually get hot enough to be worth expensive optimizations. Trying to measure how a pattern performs when it is hot may already be missing the point if the user code using that pattern isn't hot enough and will only ever be handled by the interpreter or the baseline compiler/mid-tier optimizing compiler, and compiling that code takes more time in the application than actually executing it (which can happen more frequently in CLIs).
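For readers wondering how to check this in practice, here is a minimal sketch (not necessarily the approach alluded to above): run Node.js with V8's natives syntax enabled and inspect the optimization status of the benchmarked function. The `workload` function and file name are hypothetical, and the bitfield returned by %GetOptimizationStatus is a V8 internal that changes between versions, so treat the output as a debugging aid only.

// check-dce.js — run with: node --allow-natives-syntax check-dce.js
// "workload" stands in for the operation being benchmarked.
function workload() {
  const obj = { a: 1, b: 2 };
  return obj.a + obj.b; // result is unused by callers, so the optimizer may treat this as dead
}

// Warm the function up so the optimizing compiler kicks in.
for (let i = 0; i < 1e6; i++) workload();

// %GetOptimizationStatus returns a V8-internal bitfield; the exact bits are
// version-dependent, but a raw dump is still useful when comparing two Node.js versions.
console.log('optimization status bits:', %GetOptimizationStatus(workload).toString(2));

// Alternatively, the V8 flags --trace-opt / --trace-deopt log when functions are
// optimized or deoptimized without touching the benchmark code at all.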
Could be fixed by: #256.
Thank you so much! What do you think of assigning the result of an operation to a global var, btw? It seems to fix a lot of the code elimination issues for me when benching … Something like:

// Assumes tinybench, which matches the Bench({ time: 500 }) API used here;
// `html` is the tagged-template function under test.
import { Bench } from "tinybench";

let result = "";
const bench = new Bench({ time: 500 });

bench.add("simple HTML formatting", () => {
  result = html`<div>Hello, world!</div>`;
});

bench.add("null and undefined expressions", () => {
  result = html`<p>${null} and ${undefined}</p>`;
});
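A short usage sketch continuing the snippet above (still assuming tinybench): the suite has to be run after the tasks are added, and reading the global sink afterwards makes the stored value observable, which gives the engine one more reason not to treat the work as dead.

// Hypothetical continuation of the snippet above (ESM, so top-level await is available).
await bench.run();            // execute all registered tasks
console.table(bench.table()); // tinybench's built-in results table

// Consuming the sink after the run keeps the assignments observable.
console.log("last result length:", result.length);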
It's not guaranteed that it will fix code elimination; other optimizations might take place. I was discussing this on v8-mail, and a feasible (but not realistic) way is to use …
I'll analyse the results of https://github.com/RafaelGSS/nodejs-bench-operations/commits/main/?before=b70c29c9c19936e25c7a521682ca52af49390f0f+35&after=b70c29c9c19936e25c7a521682ca52af49390f0f and compare with other tools.
nodejs/performance#166 (comment) suggests that some results are unreliable. If dead code elimination is indeed taking place, we're not getting correct results for those operations.
We should investigate and find a way to get results that reflect reality.