Add Support for cargo nextest #3920
Conversation
Apologies for the delay, I was on vacation, still catching up. |
That's okay! |
Well, it's working; actually, it's my second version working, the first one used locks. But according to the library documentation it shouldn't work (at least cross-platform):
Anyway, the overhead was as bad as I expected, as each test execution now requires the same overhead as a complete assembly. I have only tested on Firefox so far, about 12-13s each, thermal throttled. But when tests take longer than the overhead, the extra cores start to pay off. I'm having thermal throttling issues, but my speed boost might be around 3x when I have the tests configured for very heavy parameters; it's a property-based testing variation. |
I was waiting for your review, because as I do more tests, I end up finding more stuff to fix, making the review harder on you. Sorry. I ended up fixing support for Deno; I'm not sure why it was broken, but it definitely was. |
@daxpedda I have been trying to create a macro to simplify the tests; this will be particularly useful for tests that should be run over the different supported runtimes. I'm still working on the syntax, as the macro by itself allows different usage patterns. But right now, this is the format that already works:

feature! {
    given_there_is_an_assembly_with_one_failing_test();
    when_wasm_bindgen_test_runner_is_invoked_with_the_option("-V");
    "Outputs the version" {
        then_the_standard_output_should_have(
            &format!("wasm-bindgen-test-runner {}", env!("CARGO_PKG_VERSION")),
        );
    }
    "Returns success" {
        then_success_should_have_been_returned();
    }
}

It expands to two tests:

#[test]
fn outputs_the_wasm_bindgen_test_runner_version_information_feature() {
    let mut context = Context::new();
    given_there_is_an_assembly_with_one_failing_test(&mut context);
    when_wasm_bindgen_test_runner_is_invoked_with_the_option(&mut context, "-V");
    then_the_standard_output_should_have(
        &context,
        &format!("wasm-bindgen-test-runner {}", env!("CARGO_PKG_VERSION")),
    );
}
#[test]
fn returns_success_feature() {
    let mut context = Context::new();
    given_there_is_an_assembly_without_anything(&mut context);
    when_wasm_bindgen_test_runner_is_invoked_with_the_option(&mut context, "-V");
    then_success_should_have_been_returned(&context);
}

If the target platform is wasm, it uses [wasm_bindgen_test::wasm_bindgen_test] instead. This allows for a more compact file, because there are sometimes 5-7 different outcomes for a single execution context. The idea is to respect the single outcome by default, allowing easy troubleshooting of regressions, but it's possible to aggregate the executions on CI for faster execution times. |
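For illustration, a minimal sketch of how the per-target attribute selection could look for one generated test, reusing the Context and step helpers shown above (the cfg_attr approach is an assumption on my part; the macro's actual expansion may differ):

// Hypothetical expansion of a single outcome; the real macro may generate this differently.
// On native targets the plain #[test] attribute applies; on wasm32 the
// wasm_bindgen_test attribute macro takes its place.
#[cfg_attr(not(target_arch = "wasm32"), test)]
#[cfg_attr(target_arch = "wasm32", wasm_bindgen_test::wasm_bindgen_test)]
fn returns_success_feature() {
    let mut context = Context::new();
    given_there_is_an_assembly_without_anything(&mut context);
    when_wasm_bindgen_test_runner_is_invoked_with_the_option(&mut context, "-V");
    then_success_should_have_been_returned(&context);
}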
…est summary to invocation with_an_assembly without_arguments level_1 with_one_successful_test.
…ation with_an_assembly without_arguments level_1 with_one_successful_test.
…st to invocation with_an_assembly without_arguments level_1 with_one_successful_test.
…cation with_an_assembly without_arguments level_1 with_one_successful_test.
…t summary to invocation with_an_assembly without_arguments level_1 with_one_successful_test.
…est summary to invocation with_an_assembly without_arguments level_2 with_one_successful_test.
…st to invocation with_an_assembly without_arguments level_2 with_one_successful_test.
…cation with_an_assembly without_arguments level_2 with_one_successful_test.
…t summary to invocation with_an_assembly without_arguments level_2 with_one_successful_test.
…ation with_an_assembly without_arguments level_2 with_one_successful_test.
…mbly with_arguments --list --format terse default tests into level_0.
…mbly level_0 without_tests to parent.
…he tests to not wasm only.
…te module path and the ignore information.
… to support --list --format terse and --list --format terse --ignored.
…terse format to the invocation with_an_assembly with_arguments --list --format terse.
…terse format to the invocation with_an_assembly with_arguments --list --format terse default level_2.
…nvocation with_an_assembly with_arguments --list --format terse default level_1.
…nvocation with_an_assembly with_arguments --list --format terse default level_2.
…d to use the ResourceCoordinator.
@daxpedda I understand your reasoning, and to be honest, I'm trying to narrow things down to make it easier for you to review. The problem is that cargo nextest stresses wasm-bindgen-test a lot (in my repo, 771 times), which means that all the instability issues pop up. Anyway, for now I haven't been able to trigger issues with any of the supported runtimes... I was finally able to remove the hacks I had added to get cargo nextest working with the shared directory by using a ResourceCoordinator. I just have some minor things left, then I'm going to let you take the lead on this, and I can create as many PRs as necessary. |
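For context, the kind of coordination a shared directory needs across parallel nextest processes can be sketched with a plain lock file. This is only an illustrative, std-only sketch; the struct name, paths, and retry strategy are assumptions and not the PR's actual ResourceCoordinator:

use std::{fs, io, path::{Path, PathBuf}, thread, time::Duration};

// Hypothetical sketch: serialize access to a directory shared by parallel
// test processes by holding a marker file while the directory is in use.
struct DirLock {
    lock_path: PathBuf,
}

impl DirLock {
    fn acquire(shared_dir: &Path) -> io::Result<Self> {
        let lock_path = shared_dir.join(".wasm-bindgen-test.lock");
        loop {
            // create_new fails if the file already exists, so creating it
            // acts as an atomic "try lock".
            match fs::OpenOptions::new().write(true).create_new(true).open(&lock_path) {
                Ok(_) => return Ok(Self { lock_path }),
                Err(err) if err.kind() == io::ErrorKind::AlreadyExists => {
                    thread::sleep(Duration::from_millis(50));
                }
                Err(err) => return Err(err),
            }
        }
    }
}

impl Drop for DirLock {
    fn drop(&mut self) {
        // Best effort: release the lock even if the guarded work panicked.
        let _ = fs::remove_file(&self.lock_path);
    }
}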
…use clap parsed run arguments.
…ly emit a single function.
…n_test_runner_env_set from wasm-bindgen_test_runner_command.
… by cargo nextest.
…est to reference the compiled assembly.
…ndgen-test-runner was compiled already.
…ackage directory.
…gen-test-runner is no longer necessary.
This looks great! Has it stalled? |
@ifiokjr No, it's done, although now I have some merge conflicts to solve. I have just been working on a way to make the tests more intuitive, to see if it eases the merging. I just haven't pushed it, because it didn't seem to be a priority for anyone and I would break the existing tests, but if it's a priority for you, I can try to speed things up... |
Done in #4356. |
This is still work in progress:
https://nexte.st/book/custom-test-harnesses.html
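For reference, nextest's custom-test-harness protocol (linked above) drives the harness through flags such as --list, --format terse, and --ignored. Below is a minimal sketch of how such flags could be modelled with clap's derive API; the struct and exact flag set are assumptions for illustration, not this PR's real CLI:

use clap::Parser;

// Hypothetical subset of the flags a nextest-compatible harness accepts;
// the actual CLI added in this PR may differ.
#[derive(Parser, Debug)]
struct Cli {
    /// List tests instead of running them.
    #[arg(long)]
    list: bool,

    /// Output format used together with --list (nextest passes "terse").
    #[arg(long)]
    format: Option<String>,

    /// Restrict listing/running to ignored tests.
    #[arg(long)]
    ignored: bool,

    /// Test name filters forwarded by the caller.
    filters: Vec<String>,
}

fn main() {
    let cli = Cli::parse();
    if cli.list {
        // libtest's terse list format prints one "<name>: test" line per test.
        println!("example_test: test");
        return;
    }
    // ... run the selected tests ...
}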
Progress
Updated to use clap
Updated the macro wasm_bindgen_test
Testing
I wasn't sure what the best place to put the tests was, because of the custom test runner in the cli crate, so I placed them in the main tests folder.
I used a variation of BDD that I have been using for many years now (a short sketch of the step style follows below).
-- Although it seems a bit more verbose, it makes writing tests a lot simpler and faster; anyone can understand them and add new ones.
-- When something breaks, it's very easy to understand why.
-- Although counter-intuitive, in practice I have found that they are a lot easier to update during refactors.
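As an illustration of the style, here is a minimal sketch of what a step helper and its shared context could look like. The Context fields, the PATH-based lookup of the runner, and the helper bodies are assumptions for illustration, not the PR's actual test code:

use std::path::PathBuf;
use std::process::{Command, Output};

// Hypothetical shared state threaded through every given/when/then step.
struct Context {
    assembly_dir: PathBuf,
    output: Option<Output>,
}

fn when_wasm_bindgen_test_runner_is_invoked_with_the_option(context: &mut Context, option: &str) {
    // Each "when" step performs one action and records its result in the context.
    // Assumes the runner binary is on PATH; the real tests likely locate it differently.
    let output = Command::new("wasm-bindgen-test-runner")
        .arg(option)
        .current_dir(&context.assembly_dir)
        .output()
        .expect("failed to spawn wasm-bindgen-test-runner");
    context.output = Some(output);
}

fn then_success_should_have_been_returned(context: &Context) {
    // Each "then" step asserts a single outcome of the recorded action.
    let output = context.output.as_ref().expect("no command was run");
    assert!(output.status.success(), "expected a zero exit status");
}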
Overhead
Architecture:
-- a folder with the CLI and environment stuff
-- a folder with the wasm handling
-- a folder with the runtimes