`test/scale` crate out of date #1105
Comments
@Fi3 can you provide clarity on
Just spoke with @darricksee and explained the following: the test simply outputs the time spent sending 1,000,000 `SubmitSharesStandard` messages. The supported run flags are:

For example, to run with 4 hops and encryption on: `cargo run --release -- -h 4 -e`
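The `-h`/`-e` flags above come straight from the comment; how the test actually parses them is not shown here. As a hypothetical sketch (not the real `test/scale` code), a minimal hand-rolled parser for those two flags could look like this:

```rust
// Hypothetical sketch of parsing the scale test's run flags with plain
// std iteration; the real test/scale crate may use a CLI library instead.

#[derive(Debug, Default, PartialEq)]
struct RunConfig {
    hops: u32,        // -h <n>: number of hops the shares travel through
    encryption: bool, // -e: enable encryption on the connections
}

fn parse_args<I: IntoIterator<Item = String>>(args: I) -> Result<RunConfig, String> {
    let mut cfg = RunConfig::default();
    let mut iter = args.into_iter();
    while let Some(arg) = iter.next() {
        match arg.as_str() {
            "-h" => {
                let val = iter.next().ok_or("-h requires a value")?;
                cfg.hops = val.parse().map_err(|_| format!("invalid hop count: {val}"))?;
            }
            "-e" => cfg.encryption = true,
            other => return Err(format!("unknown flag: {other}")),
        }
    }
    Ok(cfg)
}

fn main() {
    // Equivalent of `cargo run --release -- -h 4 -e`
    let cfg = parse_args(["-h", "4", "-e"].map(String::from)).unwrap();
    println!("{cfg:?}");
}
```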
@Fi3, given this context, I think this could be something useful to update and keep around. I like the idea of having some checks to make sure we are not making changes that hurt performance, before we get too far in solidifying those changes. Thinking about performance monitoring also made me wonder if we should also be using something like
Now that we have an MVP, and as we move into the refactor and optimization phase, I think we need to start being more mindful of code performance. @GitGab19, any thoughts?
Code performance is definitely something we need to care about. Regarding the feature of having checks on every PR to ensure we do not introduce regressions, that's the reason behind run-benchmarks.yaml, track-benchmarks, and run-and-track-benchmarks-on-main.yaml. They use
But it seems something is not working properly (as described by #1051); in addition, I think the benches defined there are incomplete and not that helpful in the end. @Fi3, I don't know if you agree with me on this. To summarize, I think that some kind of tests like this one could be more helpful than what we have now, so I'm up for reconsidering the way we are checking our code performance 👍
Also, we currently don't know how our codebase scales at all, since we have never tested it with more than 2 machines pointed at the translator. So I strongly believe we should put our focus here. I think that benchmarking-tool can also help with this, depending on how heavily it will be used.
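The scaling concern above (only ever 2 machines pointed at the translator) can be explored locally before any multi-machine setup exists. Below is a hypothetical sketch, not part of SRI or benchmarking-tool, that simulates N concurrent clients hammering a single receiver with threads and an `mpsc` channel:

```rust
use std::sync::mpsc;
use std::thread;

/// Simulate `clients` concurrent senders, each submitting `shares_per_client`
/// messages to a single receiver, mimicking many machines pointed at one
/// translator. Returns the total number of messages received.
fn simulate_load(clients: usize, shares_per_client: usize) -> usize {
    let (tx, rx) = mpsc::channel::<usize>();
    let mut handles = Vec::new();
    for id in 0..clients {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            for _ in 0..shares_per_client {
                tx.send(id).expect("receiver alive");
            }
        }));
    }
    drop(tx); // closing the last sender ends the receive loop below
    let received = rx.iter().count();
    for h in handles {
        h.join().expect("sender thread panicked");
    }
    received
}

fn main() {
    println!("received {}", simulate_load(8, 1_000));
}
```

Sweeping `clients` upward while timing the receive loop would give a first, single-machine approximation of how throughput degrades with connection count.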
OK, I just put this issue on the 1.2.0 milestone so we can address it in the future. We should eventually have a call to discuss the scope of the work needed here before we start getting our hands dirty.
Background

`stratum/test` contains three crates:

- `config/`: contains each role's toml config for the `message-generator` test stack.
- `message-generator`: generates messages to test SRI.
- `scale`: outputs the time spent sending 1,000,000 `SubmitSharesStandard` throughout the system. It contains a `main.rs` binary, which means it will generate a `Cargo.lock` when built.

When working on enforcing no changes to `Cargo.lock` in #1039 (also related to #1044 and #1102), each crate was investigated to see which contains a `main.rs`, in order to enforce the no-change rule on lock files during the `pre-push` call to `scripts/clippy-fmt-and-test.sh`.

Problem

It is unclear exactly how or when `scale` should be used. When running `cargo build` in `test/scale`, a versioning error on `network_helpers_sv2` is encountered. This indicates to me that this crate is very out of date and not really used.

The only reference to the `scale` crate is within the `test/scale` directory itself. Perhaps this indicates accumulated dust, and it should just be removed.

Furthermore, the `test` directory containing the `scale` crate is not included in the checks run by `scripts/clippy-fmt-and-test.sh`. If we keep the `test/scale` crate, should we include it in `scripts/clippy-fmt-and-test.sh`?

Solution

Understand what `test/scale` is used for. If it is needed, update it so it runs properly. If it is not needed, remove it and all references to it.
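For context on the kind of measurement described in Background, a throughput test of this shape can be sketched in a few lines of std Rust. This is a hedged illustration only: `SubmitSharesStandard` below is a stand-in struct, not the actual SV2 message type, and the channel here replaces the real networked hops through the system.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Stand-in for the SV2 SubmitSharesStandard message; the real type lives in
// the SRI protocol crates and carries more fields.
struct SubmitSharesStandard {
    sequence_number: u32,
}

/// Measure the wall-clock time spent pushing `n` shares through a channel,
/// in the spirit of what test/scale reports for 1,000,000 messages.
fn time_sending(n: u32) -> Duration {
    let (tx, rx) = mpsc::channel();
    let consumer = thread::spawn(move || {
        let mut count = 0u32;
        while rx.recv().is_ok() {
            count += 1;
        }
        count
    });
    let start = Instant::now();
    for seq in 0..n {
        tx.send(SubmitSharesStandard { sequence_number: seq }).unwrap();
    }
    drop(tx); // signal end-of-stream to the consumer
    let received = consumer.join().unwrap();
    let elapsed = start.elapsed();
    assert_eq!(received, n); // every share must arrive
    elapsed
}

fn main() {
    // Smaller n than the real test's 1,000,000 to keep the example quick.
    let elapsed = time_sending(100_000);
    println!("sent 100000 shares in {elapsed:?}");
}
```

Running a fixed-size measurement like this before and after a change is the simplest form of the regression check discussed in the comments above.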