Coverage report is not uploaded when there are failing tests #67

Open
scpeters opened this issue Nov 6, 2023 · 4 comments

Comments

@scpeters
Contributor

scpeters commented Nov 6, 2023

The action script appears to halt when there are failing tests, and it doesn't compute or upload the coverage results. I can think of two ways to fix this (a rough sketch follows the list):

  1. Run make test || true so that the script can continue, and then add a step after uploading test coverage that fails if there were any failing tests.
  2. Switch to colcon for building and testing, so that colcon test can be run in the place where make test is currently run, then compute and upload test coverage, then run colcon test-result at the very end, which will fail if there are failing tests.
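
A rough sketch of option 1 as workflow steps, with the colcon variant of option 2 noted in comments at the end. The build commands, coverage tooling, and step names here are illustrative placeholders, not this action's actual script:

```yaml
jobs:
  build_and_test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build with coverage instrumentation
        run: |
          # hypothetical build; the real action has its own build logic
          cmake -B build -DCMAKE_CXX_FLAGS=--coverage
          cmake --build build

      - name: Test (do not stop the job on failures)
        id: test
        run: |
          cd build
          # option 1: let the step succeed either way, but remember the result
          if make test; then
            echo "tests_failed=false" >> "$GITHUB_OUTPUT"
          else
            echo "tests_failed=true" >> "$GITHUB_OUTPUT"
          fi

      - name: Compute coverage
        run: lcov --capture --directory build --output-file coverage.info

      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          files: coverage.info

      - name: Fail the job if any tests failed
        if: steps.test.outputs.tests_failed == 'true'
        run: exit 1

      # Option 2 would instead use colcon:
      #   colcon test            (in place of make test, without failing the job)
      #   ... compute and upload coverage ...
      #   colcon test-result     (at the very end; returns nonzero if any test failed)
```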
@j-rivero
Contributor

j-rivero commented Nov 6, 2023

Nothing against fixing this in one of the ways suggested. Thinking about it twice, though: I have some doubts about how good or bad it is to enable test coverage reporting while tests are failing. A test failure can directly affect the code coverage values, and if that happens, the noise introduced in the PR is higher. The situation for flaky test failures could be even worse.

On the other hand, the lack of a coverage report can confuse developers into thinking it is somehow a problem in the infra.

@scpeters
Contributor Author

scpeters commented Nov 6, 2023

I think it's better to have coverage data from failing tests than to have no coverage data at all.

@claraberendsen

There seems to be a way to allow coverage regardless of CI via the configuration option called require_ci_to_pass.
There is another flag, wait_for_ci under notify, that allows codecov to report without waiting for the other statuses to complete.
However, taking a closer look at the configuration of this action, it seems like we are generating the coverage report within our custom action, so I'm not sure these flags in codecov.yml would be taken into account; I would be fine implementing any of the strategies outlined above.
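
For reference, a sketch of where those two flags would live in codecov.yml; as said, it's unclear whether they would have any effect here, since the report is generated and uploaded by the custom action itself:

```yaml
codecov:
  require_ci_to_pass: false   # post coverage results even if other CI statuses fail
  notify:
    wait_for_ci: false        # do not wait for other CI statuses before notifying
```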

From a broader perspective, I agree with Jose's point about adding noise to a PR and taking inaccurate values as correct when deciding to merge it. However, I do see the value of knowing whether the changes made only by you are increasing or decreasing the code coverage.
I don't think showing inaccurate metrics on test failures is the way to move forward, because it introduces uncertainty and nuance into the metric: is this code coverage an accurate representation of the changes made? My argument would probably go towards more isolation, either using codecov flags or something similar, to reduce that noise as much as possible.

@scpeters Are tests easily distinguishable from one another, so that we can test against the changes vs. the world?
My main concern is that you won't be able to take the reported metric at face value, and that is key to any metric in my opinion.

@scpeters
Contributor Author

scpeters commented Nov 6, 2023

> @scpeters Are tests easily distinguishable from one another, so that we can test against the changes vs. the world? My main concern is that you won't be able to take the reported metric at face value, and that is key to any metric in my opinion.

I find lots of value in seeing how code coverage is changing due to a given pull request, both as a reviewer and as a pull request author. That is one of the best times to add a test, and the coverage report is useful to reviewers.

In terms of the absolute code coverage values from automated testing, I consider it a necessary but not sufficient indicator of test quality. You can add a test that executes code but has no expectations or even wrong expectations on the behavior, and that will show up as covered.

So personally, I take the absolute code coverage metric with a grain of salt anyway and would rather not sacrifice the coverage reports for each pull request. I suppose we could try to use different logic for pull requests and release branch commits, but if we don't add that distinction, I would prefer to keep code coverage reports even when tests are failing.
