perf(connector): add integration benchmark for nexmark parsing #13073
Conversation
Force-pushed from a5702c3 to 1ca0a80.
```rust
// Enable tracing globally.
//
// TODO: we should use `tracing::with_default` to set the dispatch in the scope,
// so that we can compare the performance with/without tracing side by side.
// However, it's unclear why the global dispatch performs much worse than the scoped one.
dispatch.init();

c.bench_function("parse_nexmark", |b| {
    b.iter_batched(
        make_stream_iter,
        |mut iter| iter.next().unwrap(),
        BatchSize::SmallInput,
    )
});
```
Should it be like this?
```rust
// Enable tracing globally.
//
// TODO: we should use `tracing::with_default` to set the dispatch in the scope,
// so that we can compare the performance with/without tracing side by side.
// However, it's unclear why the global dispatch performs much worse than the scoped one.
dispatch.init();

c.bench_function("parse_nexmark", |b| {
    b.iter_batched(
        make_stream_iter,
        |mut iter| iter.next().unwrap(),
        BatchSize::SmallInput,
    )
});

tracing::dispatcher::with_default(&dispatch, || {
    c.bench_function("parse_nexmark_with_tracing_scoped", |b| {
        b.iter_batched(
            make_stream_iter,
            |mut iter| iter.next().unwrap(),
            BatchSize::SmallInput,
        )
    })
});
```
Resolved in 456cd32, but not sure why. 😄
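For readers following this thread, here is a minimal, self-contained sketch (not code from this PR) that contrasts the two ways of installing a tracing dispatcher being compared above. The `FmtSubscriber` is only a placeholder subscriber for illustration; the benchmark uses its own.

```rust
// Illustrative only: contrasts a scoped dispatcher with a global one.
use tracing::dispatcher::{self, Dispatch};
use tracing_subscriber::FmtSubscriber;

fn main() {
    let dispatch = Dispatch::new(FmtSubscriber::builder().finish());

    // Scoped: the dispatcher is active only while the closure runs,
    // which is what `tracing::dispatcher::with_default` provides.
    dispatcher::with_default(&dispatch, || {
        tracing::info!("recorded by the scoped dispatcher");
    });

    // Global: the dispatcher stays active for the rest of the process,
    // comparable to calling `dispatch.init()` in the benchmark.
    dispatcher::set_global_default(dispatch).expect("global default was already set");
    tracing::info!("recorded by the global dispatcher");
}
```

The suggested `parse_nexmark_with_tracing_scoped` variant above uses the scoped form, so both setups can be measured side by side in a single run.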
Codecov Report
```
@@            Coverage Diff             @@
##             main   #13073      +/-   ##
==========================================
- Coverage   68.40%   68.33%   -0.07%
==========================================
  Files        1498     1498
  Lines      252154   252154
==========================================
- Hits       172490   172320     -170
- Misses      79664    79834     +170
```
Flags with carried forward coverage won't be shown. See 21 files with indirect coverage changes.
Signed-off-by: Bugen Zhao [email protected]

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.
What's changed and what's your intention?
To help us investigate #12959.
The "integration" means that we are not only evaluating the performance of parser itself (which is already covered by #9195), but also include the related procedure like tracing, offset maintaining, chunk building and so on, which is just like how we do parsing in production.
Something that confused me (UPDATE: resolved, but I'm still confused about why)
risingwave/src/connector/benches/nexmark_integration.rs, lines 117 to 122 in 1ca0a80
Results:
...which confirms #12959 (comment).
Checklist
`./risedev check` (or alias, `./risedev c`)

Documentation
Release note
If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.