Update --dry-run output #272
@@ -1,3 +1,4 @@
+import json
 import logging
 import pathlib
 import time
@@ -55,13 +56,23 @@
 @click.option(
     "--dry-run",
     "dry_run",
-    help='Print list of tests to run and options that need to be added to the test runner as a space-separated list to stdout. Format is ATS_TESTS_TO_RUN="<options> <test_1> <test_2> ... <test_n>"',
+    help=(
+        "Print list of tests to run AND tests skipped (and options that need to be added to the test runner) to stdout. "
+        + "Also prints the same information in JSON format. "
+        + "JSON will have keys 'ats_tests_to_run', 'ats_tests_to_skip' and 'runner_options'. "
Review comment: I think most users (on small-to-medium projects) will see "tests-to-skip" as unnecessary noise. Entirely up to you. I'm happy either way.
Reply: I agree with you. I think that will eventually be the case. As I mentioned, though, while the product is still in beta we are outputting everything to understand what is indeed considered "noise" and what usage patterns arise. Those data points will help us define better interfaces in the future.
+        + "List of tests to run is prefixed with ATS_TESTS_TO_RUN= "
+        + "List of tests to skip is prefixed with ATS_TESTS_TO_SKIP="
+    ),
     is_flag=True,
 )
 @click.option(
     "--dry-run-output-path",
     "dry_run_output_path",
-    help="Prints the dry-run list into dry_run_output_path (in addition to stdout)",
+    help=(
+        "Prints the dry-run list (ATS_TESTS_TO_RUN) into dry_run_output_path (in addition to stdout)\n"
+        + "AND prints ATS_TESTS_TO_SKIP into dry_run_output_path_skipped\n"
+        + "AND prints dry-run JSON output into dry_run_output_path.json"
+    ),
     type=pathlib.Path,
     default=None,
 )
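To make the file outputs concrete: a minimal consumer-side sketch (not part of this PR; the path name is invented) of reading the three files the command writes when called with --dry-run --dry-run-output-path=ats_output, following the naming scheme in the help text above.

import json
import pathlib

# Assumed invocation: the CLI ran with --dry-run --dry-run-output-path=ats_output
base = pathlib.Path("ats_output")

# Runner options plus double-quoted labels to run, space-separated on one line
tests_to_run_line = base.read_text().strip()

# Same format, for the labels that can be skipped
tests_to_skip_line = pathlib.Path(str(base) + "_skipped").read_text().strip()

# Machine-readable variant with the three documented keys
dry_run_json = json.loads(pathlib.Path(str(base) + ".json").read_text())
print(dry_run_json["runner_options"])
print(dry_run_json["ats_tests_to_run"])
print(dry_run_json["ats_tests_to_skip"])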
@@ -313,18 +324,50 @@ def _dry_run_output(
     runner: LabelAnalysisRunnerInterface,
     dry_run_output_path: Optional[pathlib.Path],
 ):
-    labels_to_run = list(
-        set(
-            result.absent_labels
-            + result.global_level_labels
-            + result.present_diff_labels
-        )
-    )
+    labels_to_run = set(
+        result.absent_labels + result.global_level_labels + result.present_diff_labels
+    )
+    labels_skipped = set(result.present_report_labels) - labels_to_run
+    # If the test label can contain spaces and dashes the test runner might
+    # interpret it as an option and not a label
+    # So we wrap it in doublequotes just to be extra sure
+    labels_run_wrapped_double_quotes = sorted(
+        map(lambda l: '"' + l + '"', labels_to_run)
+    )
+    labels_skip_wrapped_double_quotes = sorted(
+        map(lambda l: '"' + l + '"', labels_skipped)
+    )
+
+    output_as_dict = dict(
+        runner_options=runner.dry_run_runner_options,
+        ats_tests_to_run=labels_run_wrapped_double_quotes,
+        ats_tests_to_skip=labels_skip_wrapped_double_quotes,
+    )
-    output = runner.dry_run_runner_options + sorted(labels_to_run)
     if dry_run_output_path is not None:
         with open(dry_run_output_path, "w") as fd:
-            fd.write(" ".join(output) + "\n")
-    click.echo(f"ATS_TESTS_TO_RUN=\"{' '.join(output)}\"")
+            fd.write(
+                " ".join(
+                    runner.dry_run_runner_options + labels_run_wrapped_double_quotes
+                )
+                + "\n"
+            )
+        with open(str(dry_run_output_path) + "_skipped", "w") as fd:
+            fd.write(
+                " ".join(
+                    runner.dry_run_runner_options + labels_skip_wrapped_double_quotes
+                )
+                + "\n"
+            )
+        with open(str(dry_run_output_path) + ".json", "w") as fd:
+            fd.write(json.dumps(output_as_dict) + "\n")
+
+    click.echo(json.dumps(output_as_dict))
+    click.echo(
+        f"ATS_TESTS_TO_RUN={' '.join(runner.dry_run_runner_options + labels_run_wrapped_double_quotes)}"
+    )
+    click.echo(
+        f"ATS_TESTS_TO_SKIP={' '.join(runner.dry_run_runner_options + labels_skip_wrapped_double_quotes)}"
+    )
Review comment on lines +364 to +370: This stdout is nearly but not quite JSON-parseable. If you choose one of these, the …
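For illustration of the parseability point above, a wrapper script can still recover the machine-readable part of the mixed stdout by keeping only the line that parses as a JSON object. This is a sketch, not code from the PR; the sample string merely mirrors the format _dry_run_output produces.

import json

sample_stdout = "\n".join(
    [
        '{"runner_options": ["--labels"], "ats_tests_to_run": ["\\"test_absent\\""], "ats_tests_to_skip": ["\\"test_present\\""]}',
        'ATS_TESTS_TO_RUN=--labels "test_absent"',
        'ATS_TESTS_TO_SKIP=--labels "test_present"',
    ]
)

parsed = None
for line in sample_stdout.splitlines():
    try:
        candidate = json.loads(line)
    except json.JSONDecodeError:
        # The ATS_TESTS_TO_RUN= / ATS_TESTS_TO_SKIP= lines are not valid JSON
        continue
    if isinstance(candidate, dict) and "ats_tests_to_run" in candidate:
        parsed = candidate
        break

assert parsed is not None
runner_options = parsed["runner_options"]  # ["--labels"]
tests_to_run = parsed["ats_tests_to_run"]  # labels arrive pre-wrapped in double quotes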
 
 
 def _fallback_to_collected_labels(
@@ -123,13 +123,19 @@ def test_labelanalysis_help(self, mocker, fake_ci_provider):
             " --max-wait-time INTEGER Max time (in seconds) to wait for the label",
             " analysis result before falling back to running",
             " all tests. Default is to wait forever.",
-            " --dry-run Print list of tests to run and options that need",
-            " to be added to the test runner as a space-",
-            " separated list to stdout. Format is",
-            ' ATS_TESTS_TO_RUN="<options> <test_1> <test_2>',
-            ' ... <test_n>"',
-            " --dry-run-output-path PATH Prints the dry-run list into dry_run_output_path",
-            " (in addition to stdout)",
+            " --dry-run Print list of tests to run AND tests skipped",
+            " (and options that need to be added to the test",
+            " runner) to stdout. Also prints the same",
+            " information in JSON format. JSON will have keys",
+            " 'ats_tests_to_run', 'ats_tests_to_skip' and",
+            " 'runner_options'. List of tests to run is",
+            " prefixed with ATS_TESTS_TO_RUN= List of tests to",
+            " skip is prefixed with ATS_TESTS_TO_SKIP=",
+            " --dry-run-output-path PATH Prints the dry-run list (ATS_TESTS_TO_RUN) into",
+            " dry_run_output_path (in addition to stdout) AND",
+            " prints ATS_TESTS_TO_SKIP into",
+            " dry_run_output_path_skipped AND prints dry-run",
+            " JSON output into dry_run_output_path.json",
             " -h, --help Show this message and exit.",
             "",
         ]
@@ -231,7 +237,7 @@ def test_invoke_label_analysis_dry_run(self, get_labelanalysis_deps, mocker):
         fake_runner = get_labelanalysis_deps["fake_runner"]
 
         label_analysis_result = {
-            "present_report_labels": ["test_present"],
+            "present_report_labels": ["test_present", "test_in_diff", "test_global"],
             "absent_labels": ["test_absent"],
             "present_diff_labels": ["test_in_diff"],
             "global_level_labels": ["test_global"],
@@ -278,9 +284,14 @@ def test_invoke_label_analysis_dry_run(self, get_labelanalysis_deps, mocker):
         print(result.output)
         assert result.exit_code == 0
         assert (
-            'ATS_TESTS_TO_RUN="--labels test_absent test_global test_in_diff'
+            '{"runner_options": ["--labels"], "ats_tests_to_run": ["\\"test_absent\\"", "\\"test_global\\"", "\\"test_in_diff\\""], "ats_tests_to_skip": ["\\"test_present\\""]}'
             in result.output
         )
+        assert (
+            'ATS_TESTS_TO_RUN=--labels "test_absent" "test_global" "test_in_diff"'
+            in result.output
+        )
+        assert 'ATS_TESTS_TO_SKIP=--labels "test_present"' in result.output
 
     def test_invoke_label_analysis_dry_run_with_output_path(
         self, get_labelanalysis_deps, mocker
@@ -289,7 +300,7 @@ def test_invoke_label_analysis_dry_run_with_output_path(
         fake_runner = get_labelanalysis_deps["fake_runner"]
 
         label_analysis_result = {
-            "present_report_labels": ["test_present"],
+            "present_report_labels": ["test_present", "test_in_diff", "test_global"],
             "absent_labels": ["test_absent"],
             "present_diff_labels": ["test_in_diff"],
             "global_level_labels": ["test_global"],
@@ -332,20 +343,38 @@ def test_invoke_label_analysis_dry_run_with_output_path(
             ],
             obj={},
         )
-        labels_file = Path("ats_output_path")
+        print(result)
+        print(result.output)
Review comment on lines +346 to +347: Maybe clean the prints up, but there are other tests that leave them in, so I don't mind either way.
Reply: The prints only show up if the test fails. In this case they are very useful to debug why the test failed. That's why some tests leave them there.
+
+        ats_output_path = "ats_output_path"
+        labels_file = Path(ats_output_path)
+        skip_labels_file = Path(ats_output_path + "_skipped")
+        json_output = Path(ats_output_path + ".json")
         assert labels_file.exists() and labels_file.is_file()
+        assert skip_labels_file.exists() and skip_labels_file.is_file()
+        assert json_output.exists() and json_output.is_file()
         with open(labels_file, "r") as fd:
             assert fd.readlines() == [
-                "--labels test_absent test_global test_in_diff\n"
+                '--labels "test_absent" "test_global" "test_in_diff"\n'
             ]
+        with open(skip_labels_file, "r") as fd:
+            assert fd.readlines() == ['--labels "test_present"\n']
+        with open(json_output, "r") as fd:
+            assert fd.readlines() == [
+                '{"runner_options": ["--labels"], "ats_tests_to_run": ["\\"test_absent\\"", "\\"test_global\\"", "\\"test_in_diff\\""], "ats_tests_to_skip": ["\\"test_present\\""]}\n'
Review comment: Is there any reason to double-quote the test names? Presumably there's no test named … This looks over-quoted to me.
Reply: We recently had a customer run into trouble because one of their parametrized tests looked something like … That's why we are now double-quoting all labels.
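To illustrate the failure mode the reply describes (the label value below is invented), a label containing spaces and dashes gets split into what look like extra runner options unless it is wrapped in double quotes:

import shlex

label = "test_thing[--flag -x 1]"  # hypothetical parametrized label with spaces and dashes

# Unquoted, argument splitting produces four tokens, two of which look like options.
print(shlex.split("--labels " + label))
# ['--labels', 'test_thing[--flag', '-x', '1]']

# Wrapped in double quotes, the label survives as a single argument.
print(shlex.split('--labels "' + label + '"'))
# ['--labels', 'test_thing[--flag -x 1]']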
+            ]
Review comment on lines +362 to 364: This represents the intended value more clearly, and should(?) yield better failure messages. It has the added side effect of ignoring any whitespace changes, which I don't believe you intend to care about in this particular test.
Suggested change: …
The included quotes are more clearly pretty weird, here.
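A sketch of what that suggestion could look like (the suggestion block itself did not survive extraction, so this is an assumption): load the JSON file and compare against a plain dict, which yields a structured failure message and ignores the file's formatting.

# Assumes `json` is imported in the test module and `json_output` is the Path defined above.
with open(json_output, "r") as fd:
    assert json.load(fd) == {
        "runner_options": ["--labels"],
        "ats_tests_to_run": ['"test_absent"', '"test_global"', '"test_in_diff"'],
        "ats_tests_to_skip": ['"test_present"'],
    }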
         mock_get_runner.assert_called()
         fake_runner.process_labelanalysis_result.assert_not_called()
         print(result.output)
         assert result.exit_code == 0
         assert (
-            'ATS_TESTS_TO_RUN="--labels test_absent test_global test_in_diff'
+            '{"runner_options": ["--labels"], "ats_tests_to_run": ["\\"test_absent\\"", "\\"test_global\\"", "\\"test_in_diff\\""], "ats_tests_to_skip": ["\\"test_present\\""]}'
             in result.output
         )
+        assert (
+            'ATS_TESTS_TO_RUN=--labels "test_absent" "test_global" "test_in_diff"'
+            in result.output
+        )
+        assert 'ATS_TESTS_TO_SKIP=--labels "test_present"' in result.output
 
     def test_fallback_to_collected_labels(self, mocker):
         mock_runner = mocker.MagicMock()
Review comment: It's not clear (to me) what label-analysis does on a "wet" run. Add a function docstring, or else pass @command(help=...), to help users understand.
Reply: I suppose you are right... possibly we should not have built it this way, but created a new runner to do what the dry-run option does. That would reinforce the intended way of using it.
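One way to address the first point is sketched below; the command name, decorator arguments, and wording are assumptions rather than code from this PR. Attaching a help string to the click command itself makes `label-analysis --help` describe the non-dry ("wet") behaviour as well.

import click

@click.command(
    name="label-analysis",
    help=(
        "Ask Codecov which tests are relevant for the current commit and run them "
        "through the configured test runner. With --dry-run, only print the selected "
        "tests (and the options to pass to the runner) instead of running them."
    ),
)
def label_analysis():
    # Placeholder body: this sketch only illustrates wiring a help string to the command.
    ...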