Coverage report is not uploaded when there are failing tests #67
Nothing against fixing this in one of the ways suggested. Thinking twice about the issue, though: I have some doubts about whether enabling test coverage reporting while tests are failing is a good idea. A test failure can directly affect the code coverage values, and if that happens, the noise introduced in the PR is higher. The situation could be even worse with flaky test failures. On the other hand, the lack of a coverage report can confuse developers into thinking it is somehow a problem in the infrastructure.
I think it's better to have coverage data from failing tests than to have no coverage data at all.
There seems to be a way in the configuration to allow uploading coverage regardless of CI status. From a broader perspective, I agree with Jose's point about adding noise to a PR and taking inaccurate values as correct when deciding to merge it. However, I do see the value in seeing whether the changes made only by you are increasing or decreasing the code coverage. @scpeters Are tests easily distinguishable from one another, so that we could measure coverage of the changes against the rest of the codebase?
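The configuration option referred to above isn't named here, but as a rough script-level illustration of the same idea, uploading coverage regardless of how the CI steps end, a bash EXIT trap can be used. This is only a sketch, assuming the action runs a bash script; the function body is a placeholder, not the real uploader.

```bash
#!/usr/bin/env bash
set -e

# Sketch: upload coverage no matter how the script ends.
# The function body is a placeholder for illustration only.
upload_coverage() {
  echo "computing and uploading coverage report..."
  # ... real coverage computation and upload would go here ...
}
trap upload_coverage EXIT   # fires on success and on failure alike

make test   # with set -e, a failure still ends the script with a nonzero
            # status, but the EXIT trap uploads coverage first
```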
I find lots of value in seeing how code coverage changes due to a given pull request, both as a reviewer and as a pull request author. That is one of the best times to add a test, and the coverage report is useful to reviewers. As for the absolute code coverage values from automated testing, I consider them a necessary but not sufficient indicator of test quality: you can add a test that executes code but has no expectations, or even wrong expectations, on the behavior, and that code will show up as covered. So personally, I take the absolute code coverage metric with a grain of salt anyway and would rather not sacrifice the coverage reports for each pull request. I suppose we could try to use different logic for pull requests and release branch commits, but if we don't add that distinction, I would prefer to keep code coverage reports even when tests are failing.
The action script appears to halt if there are failing tests and doesn't compute or upload the coverage results. I can think of two ways to fix this:

1. Run `make test || true` so that the script can continue, then add a step after uploading test coverage that fails if there were any failing tests (see the first sketch below).
2. Use `colcon` for building and testing, so that `colcon test` can be run in the place where `make test` is currently run, then compute and upload test coverage, then run `colcon test-result` at the very end, which will fail if there are failing tests (see the second sketch below).
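For concreteness, here is a minimal sketch of what the first option could look like inside the action's shell script. The `make coverage` target and the upload placeholder are assumptions for illustration; the issue only specifies `make test || true` and the ordering of the steps.

```bash
#!/usr/bin/env bash
set -e  # assuming the script exits on the first error, hence the || below

# Run the tests without aborting the script, but remember the real result.
TEST_STATUS=0
make test || TEST_STATUS=$?

# Compute and upload coverage regardless of the test outcome.
make coverage   # hypothetical coverage target, named for illustration only
# ... upload the coverage report here ...

# Surface the original test result last, so CI still fails on failing tests.
exit "$TEST_STATUS"
```

The second option expresses the same idea with `colcon`, whose `test` and `test-result` verbs separate running the tests from checking their results:

```bash
#!/usr/bin/env bash
set -e

colcon build
colcon test   # runs the tests; per the issue, failures are surfaced by
              # colcon test-result rather than by this step

# ... compute and upload the coverage report here ...

# Exits nonzero if any test failed, so placing it last fails the job
# only after coverage has been uploaded.
colcon test-result --verbose
```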