Update --dry-run output #272

Merged: 1 commit, Sep 29, 2023
65 changes: 54 additions & 11 deletions codecov_cli/commands/labelanalysis.py
@@ -1,3 +1,4 @@
import json
import logging
import pathlib
import time
@@ -55,13 +56,23 @@
@click.option(
"--dry-run",
"dry_run",
Review comment: It's not clear (to me) what label-analysis does on a "wet" run. Add a function docstring or else pass @command(help=...) to help users understand.

Reply (Contributor Author): I suppose you are right... possibly we should not have built it this way, but instead created a new runner to do what the dry-run option does. That would reinforce the intended way of using it.
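
A minimal sketch of the reviewer's suggestion, for illustration only (not part of this diff; the help wording is an assumption inferred from the surrounding code):

# Illustrative sketch, not part of this PR. The help text is a guess at what
# label-analysis does on a normal ("wet") run, based on this diff's context.
import click

@click.command(
    help=(
        "Run label analysis: query Codecov for the set of tests affected by "
        "the current changes and hand that list to the configured test runner. "
        "With --dry-run, the list is only printed instead of being passed to "
        "the runner."
    )
)
def label_analysis():
    ...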

help='Print list of tests to run and options that need to be added to the test runner as a space-separated list to stdout. Format is ATS_TESTS_TO_RUN="<options> <test_1> <test_2> ... <test_n>"',
help=(
"Print list of tests to run AND tests skipped (and options that need to be added to the test runner) to stdout. "
+ "Also prints the same information in JSON format. "
+ "JSON will have keys 'ats_tests_to_run', 'ats_tests_to_skip' and 'runner_options'. "
Review comment: I think most users (on small-to-medium projects) will see "tests-to-skip" as unnecessary noise. I'd expect it to be added to the output behind a (default false) option; --show-tests-skipped, perhaps. Entirely up to you. I'm happy either way.

Reply (Contributor Author): I agree with you. I think that will eventually be the case. As I mentioned, though, while the product is still in beta we are outputting everything to understand what is indeed considered "noise" and what usage patterns arise. Those data points will help us define better interfaces in the future.

+ "List of tests to run is prefixed with ATS_TESTS_TO_RUN= "
+ "List of tests to skip is prefixed with ATS_TESTS_TO_SKIP="
),
is_flag=True,
)
@click.option(
"--dry-run-output-path",
"dry_run_output_path",
help="Prints the dry-run list into dry_run_output_path (in addition to stdout)",
help=(
"Prints the dry-run list (ATS_TESTS_TO_RUN) into dry_run_output_path (in addition to stdout)\n"
+ "AND prints ATS_TESTS_TO_SKIP into dry_run_output_path_skipped\n"
+ "AND prints dry-run JSON output into dry_run_output_path.json"
),
type=pathlib.Path,
default=None,
)
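
For context, a small sketch of how a consumer might read the dry-run output described above (illustrative only; the ats_output file name is an assumption, and the JSON keys are the ones listed in the help text):

# Illustrative consumer of the dry-run output (not part of this PR).
# Assumes the CLI was invoked with --dry-run --dry-run-output-path=ats_output,
# so ats_output.json contains the documented keys.
import json
import shlex

with open("ats_output.json") as fd:
    dry_run = json.load(fd)

# Each label is wrapped in literal double quotes by the CLI; shlex.split
# strips them again while keeping labels that contain spaces in one piece.
tests_to_run = [shlex.split(label)[0] for label in dry_run["ats_tests_to_run"]]
tests_to_skip = [shlex.split(label)[0] for label in dry_run["ats_tests_to_skip"]]

print("runner options:", dry_run["runner_options"])
print(f"{len(tests_to_run)} tests to run, {len(tests_to_skip)} tests skipped")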
@@ -313,18 +324,50 @@ def _dry_run_output(
runner: LabelAnalysisRunnerInterface,
dry_run_output_path: Optional[pathlib.Path],
):
labels_to_run = list(
set(
result.absent_labels
+ result.global_level_labels
+ result.present_diff_labels
)
labels_to_run = set(
result.absent_labels + result.global_level_labels + result.present_diff_labels
)
labels_skipped = set(result.present_report_labels) - labels_to_run
# If the test label can contain spaces and dashes the test runner might
# interpret it as an option and not a label
# So we wrap it in doublequotes just to be extra sure
labels_run_wrapped_double_quotes = sorted(
map(lambda l: '"' + l + '"', labels_to_run)
)
labels_skip_wrapped_double_quotes = sorted(
map(lambda l: '"' + l + '"', labels_skipped)
)

output_as_dict = dict(
runner_options=runner.dry_run_runner_options,
ats_tests_to_run=labels_run_wrapped_double_quotes,
ats_tests_to_skip=labels_skip_wrapped_double_quotes,
)
output = runner.dry_run_runner_options + sorted(labels_to_run)
if dry_run_output_path is not None:
with open(dry_run_output_path, "w") as fd:
fd.write(" ".join(output) + "\n")
click.echo(f"ATS_TESTS_TO_RUN=\"{' '.join(output)}\"")
fd.write(
" ".join(
runner.dry_run_runner_options + labels_run_wrapped_double_quotes
)
+ "\n"
)
with open(str(dry_run_output_path) + "_skipped", "w") as fd:
fd.write(
" ".join(
runner.dry_run_runner_options + labels_skip_wrapped_double_quotes
)
+ "\n"
)
with open(str(dry_run_output_path) + ".json", "w") as fd:
fd.write(json.dumps(output_as_dict) + "\n")

click.echo(json.dumps(output_as_dict))
click.echo(
f"ATS_TESTS_TO_RUN={' '.join(runner.dry_run_runner_options + labels_run_wrapped_double_quotes)}"
)
click.echo(
f"ATS_TESTS_TO_SKIP={' '.join(runner.dry_run_runner_options + labels_skip_wrapped_double_quotes)}"
)
Comment on lines +364 to +370
Review comment: This stdout is nearly, but not quite, JSON-parseable. Most tools do one of two things:

  1. print JSON on stdout and non-JSON on stderr
  2. print a human-friendly form by default and JSON only when given --json

If you choose one of these, the --dry-run-output-path option can go away: tee(1) fills that need then.
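
A rough sketch of what option 2 could look like (an illustration of the reviewer's idea, not code from this PR; the function name, flag, and wiring are assumptions):

# Illustrative sketch of the reviewer's suggestion (not part of this PR).
# output_as_dict is assumed to be the same dict built in _dry_run_output;
# as_json would come from a hypothetical --json flag on the command.
import json
import click

def emit_dry_run_output(output_as_dict: dict, as_json: bool) -> None:
    if as_json:
        # Machine-readable mode: stdout carries nothing but JSON,
        # so piping through jq or tee(1) just works.
        click.echo(json.dumps(output_as_dict))
        return
    # Human-readable mode: keep the ATS_* lines for copy/paste.
    opts = output_as_dict["runner_options"]
    click.echo("ATS_TESTS_TO_RUN=" + " ".join(opts + output_as_dict["ats_tests_to_run"]))
    click.echo("ATS_TESTS_TO_SKIP=" + " ".join(opts + output_as_dict["ats_tests_to_skip"]))
    # Anything not meant to be parsed goes to stderr.
    click.echo(f"{len(output_as_dict['ats_tests_to_run'])} tests selected", err=True)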



def _fallback_to_collected_labels(
55 changes: 42 additions & 13 deletions tests/commands/test_invoke_labelanalysis.py
@@ -123,13 +123,19 @@ def test_labelanalysis_help(self, mocker, fake_ci_provider):
" --max-wait-time INTEGER Max time (in seconds) to wait for the label",
" analysis result before falling back to running",
" all tests. Default is to wait forever.",
" --dry-run Print list of tests to run and options that need",
" to be added to the test runner as a space-",
" separated list to stdout. Format is",
' ATS_TESTS_TO_RUN="<options> <test_1> <test_2>',
' ... <test_n>"',
" --dry-run-output-path PATH Prints the dry-run list into dry_run_output_path",
" (in addition to stdout)",
" --dry-run Print list of tests to run AND tests skipped",
" (and options that need to be added to the test",
" runner) to stdout. Also prints the same",
" information in JSON format. JSON will have keys",
" 'ats_tests_to_run', 'ats_tests_to_skip' and",
" 'runner_options'. List of tests to run is",
" prefixed with ATS_TESTS_TO_RUN= List of tests to",
" skip is prefixed with ATS_TESTS_TO_SKIP=",
" --dry-run-output-path PATH Prints the dry-run list (ATS_TESTS_TO_RUN) into",
" dry_run_output_path (in addition to stdout) AND",
" prints ATS_TESTS_TO_SKIP into",
" dry_run_output_path_skipped AND prints dry-run",
" JSON output into dry_run_output_path.json",
" -h, --help Show this message and exit.",
"",
]
@@ -231,7 +237,7 @@ def test_invoke_label_analysis_dry_run(self, get_labelanalysis_deps, mocker):
fake_runner = get_labelanalysis_deps["fake_runner"]

label_analysis_result = {
"present_report_labels": ["test_present"],
"present_report_labels": ["test_present", "test_in_diff", "test_global"],
"absent_labels": ["test_absent"],
"present_diff_labels": ["test_in_diff"],
"global_level_labels": ["test_global"],
@@ -278,9 +284,14 @@ def test_invoke_label_analysis_dry_run(self, get_labelanalysis_deps, mocker):
print(result.output)
assert result.exit_code == 0
assert (
'ATS_TESTS_TO_RUN="--labels test_absent test_global test_in_diff'
'{"runner_options": ["--labels"], "ats_tests_to_run": ["\\"test_absent\\"", "\\"test_global\\"", "\\"test_in_diff\\""], "ats_tests_to_skip": ["\\"test_present\\""]}'
in result.output
)
assert (
'ATS_TESTS_TO_RUN=--labels "test_absent" "test_global" "test_in_diff"'
in result.output
)
assert 'ATS_TESTS_TO_SKIP=--labels "test_present"' in result.output

def test_invoke_label_analysis_dry_run_with_output_path(
self, get_labelanalysis_deps, mocker
@@ -289,7 +300,7 @@ def test_invoke_label_analysis_dry_run_with_output_path(
fake_runner = get_labelanalysis_deps["fake_runner"]

label_analysis_result = {
"present_report_labels": ["test_present"],
"present_report_labels": ["test_present", "test_in_diff", "test_global"],
"absent_labels": ["test_absent"],
"present_diff_labels": ["test_in_diff"],
"global_level_labels": ["test_global"],
@@ -332,20 +343,38 @@
],
obj={},
)
labels_file = Path("ats_output_path")
print(result)
print(result.output)
Comment on lines +346 to +347
Review comment (Contributor): Maybe clean the prints up, but there are other tests that leave them in, so I don't mind either way.

Reply (Contributor Author): The prints only show up if the test fails. In this case they are very useful for debugging why the test failed. That's why some tests leave them in.

ats_output_path = "ats_output_path"
labels_file = Path(ats_output_path)
skip_labels_file = Path(ats_output_path + "_skipped")
json_output = Path(ats_output_path + ".json")
assert labels_file.exists() and labels_file.is_file()
assert skip_labels_file.exists() and skip_labels_file.is_file()
assert json_output.exists() and json_output.is_file()
with open(labels_file, "r") as fd:
assert fd.readlines() == [
"--labels test_absent test_global test_in_diff\n"
'--labels "test_absent" "test_global" "test_in_diff"\n'
]
with open(skip_labels_file, "r") as fd:
assert fd.readlines() == ['--labels "test_present"\n']
with open(json_output, "r") as fd:
assert fd.readlines() == [
'{"runner_options": ["--labels"], "ats_tests_to_run": ["\\"test_absent\\"", "\\"test_global\\"", "\\"test_in_diff\\""], "ats_tests_to_skip": ["\\"test_present\\""]}\n'
Review comment: Is there any reason to doubly-quote the test names? Presumably there's no test named "test_absent" (including the quotes). This looks over-quoted to me.

Reply (Contributor Author): We recently had a customer run into trouble because one of their parametrized tests looked something like path/to/test::TestClass::test_name[option -or another]. When those were piped into pytest, it assumed the -or was a command-line option for pytest itself and failed. That's why we now double-quote all labels.
]
Comment on lines +362 to 364
Review comment: This represents the intended value more clearly and should(?) yield better failure messages. It has the added side effect of ignoring any whitespace changes, which I don't believe you intend to care about in this particular test.

Suggested change, replacing:

assert fd.readlines() == [
    '{"runner_options": ["--labels"], "ats_tests_to_run": ["\\"test_absent\\"", "\\"test_global\\"", "\\"test_in_diff\\""], "ats_tests_to_skip": ["\\"test_present\\""]}\n'
]

with:

assert json.load(fd) == {
    "runner_options": ["--labels"],
    "ats_tests_to_run": [
        "\"test_absent\"",
        "\"test_global\"",
        "\"test_in_diff\""
    ],
    "ats_tests_to_skip": [
        "\"test_present\"",
    ]
}

The included quotes are more clearly pretty weird, here.
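
On the quoting concern, one hedged alternative to the manual '"' + label + '"' wrapping (an illustration only, not what this PR implements) is shlex.quote, which also protects labels that contain spaces, dashes, or embedded quotes:

# Illustrative alternative quoting (an assumption, not this PR's approach).
import shlex

labels = [
    "test_simple",
    "path/to/test::TestClass::test_name[option -or another]",
]
print(" ".join(shlex.quote(label) for label in labels))
# -> test_simple 'path/to/test::TestClass::test_name[option -or another]'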

mock_get_runner.assert_called()
fake_runner.process_labelanalysis_result.assert_not_called()
print(result.output)
assert result.exit_code == 0
assert (
'ATS_TESTS_TO_RUN="--labels test_absent test_global test_in_diff'
'{"runner_options": ["--labels"], "ats_tests_to_run": ["\\"test_absent\\"", "\\"test_global\\"", "\\"test_in_diff\\""], "ats_tests_to_skip": ["\\"test_present\\""]}'
in result.output
)
assert (
'ATS_TESTS_TO_RUN=--labels "test_absent" "test_global" "test_in_diff"'
in result.output
)
assert 'ATS_TESTS_TO_SKIP=--labels "test_present"' in result.output

def test_fallback_to_collected_labels(self, mocker):
mock_runner = mocker.MagicMock()