[Monitoring] Adding a metric for task outcome #4458
Conversation
utask_main is also run via utasks.uworker_bot_main; you might want to catch that path too.
lgtm
    # failure.
    outcome = 'error' if _exc_type or self.saw_failure else 'success'
    monitoring_metrics.TASK_OUTCOME_COUNT.increment({
        **self._labels, 'outcome': outcome
    })
Since we have a nice enum of possible errors, I'd much rather we be more specific here and record it as the outcome, e.g. result: 'NO_ERROR' or result: 'BUILD_SETUP_FAILED'. It would greatly help in debugging problems. Exceptions can be bucketed as their own result: 'UNHANDLED_EXCEPTION'.
Do you think it will pose a problem with metric cardinality?
Hmm, now that I think of it, we should get rid of job. For OSS-Fuzz:
- 15 tasks
- 1300 projects ~ 1300 jobs
- 3 subtasks (preprocess, main, postprocess)
- 2 modes (batch/queue)
- 3 platforms (Linux, Mac, Windows)
- 2 outcomes (failure/success)

This comes out to around 700k label combinations, which will bite us. We could swap job out for all outcomes, and that would be around ~16k possible label combinations, which should be fine. Wdyt?
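For reference, a back-of-the-envelope sketch of this arithmetic (plain Python, using only the approximate factor counts quoted above; nothing here is actual ClusterFuzz code):

```python
# Back-of-the-envelope label cardinality, using the approximate OSS-Fuzz
# figures quoted above. Illustrative only; not actual ClusterFuzz code.
from functools import reduce
from operator import mul

LABELS_WITH_JOB = {
    'task': 15,
    'job': 1300,     # ~1300 OSS-Fuzz projects, roughly one job each
    'subtask': 3,    # preprocess / main / postprocess
    'mode': 2,       # batch / queue
    'platform': 3,   # linux / mac / windows
    'outcome': 2,    # failure / success
}


def cardinality(label_counts):
  """Product of the number of possible values of each label."""
  return reduce(mul, label_counts.values(), 1)


print(cardinality(LABELS_WITH_JOB))  # 702000 -> the ~700k quoted above

# Dropping 'job' and widening 'outcome' to the ~40 ErrorType values keeps the
# total well under the 30k guideline discussed below.
labels_without_job = {**LABELS_WITH_JOB, 'outcome': 40}
del labels_without_job['job']
print(cardinality(labels_without_job))  # 10800
```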
@jonathanmetzman the current state of the subtask duration metric allows for around 350k different label combinations, while GCP's recommendation is at most 30k. Should we get rid of job in this context manager altogether?
What's the limit? I think there are other things that can be removed:
- I think modes is definitely not needed; the two modes won't live side by side much longer.
- I think platform may be unneeded; it can be obtained from the job name.

Would reducing cardinality by 6X (2 modes × 3 platforms) help?
Also, there are currently about 1000 platforms in oss-fuzz, but that is going away. Will that help?
It is actually 40 distinct values under that enum. 15 tasks * 40 outcomes * 1300 jobs * 3 subtasks > 1M, so yeah, there is no chance at all to get the discriminated outcomes while keeping jobs.

Another option is to spin up a separate metric, TASK_OUTCOME_BY_ERROR_TYPE, for which we track all the labels except job and manage to stay under 30k distinct label combinations.

Wdyt @letitz?
As far as converting the proto value to a name goes, it can be done like this:

    >>> uworker_msg_pb2.ErrorType.Name(uworker_msg_pb2.ErrorType.NO_ERROR)
    'NO_ERROR'
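A rough sketch of what that separate metric could look like, assuming the proposed TASK_OUTCOME_BY_ERROR_TYPE counter and the usual ClusterFuzz module layout (the metric, the helper function and the import paths below are assumptions; only ErrorType.Name is the real protobuf API):

```python
# Hypothetical sketch of the proposed TASK_OUTCOME_BY_ERROR_TYPE metric:
# same labels as TASK_OUTCOME_COUNT minus the high-cardinality 'job', plus
# the ErrorType name. Module paths assume the ClusterFuzz source layout.
from clusterfuzz._internal.metrics import monitoring_metrics
from clusterfuzz._internal.protos import uworker_msg_pb2


def record_outcome_by_error_type(labels, error_type):
  """Increments the (proposed) per-error-type counter, dropping 'job'."""
  error_name = uworker_msg_pb2.ErrorType.Name(error_type)  # e.g. 'NO_ERROR'
  labels_without_job = {k: v for k, v in labels.items() if k != 'job'}
  monitoring_metrics.TASK_OUTCOME_BY_ERROR_TYPE.increment({
      **labels_without_job, 'error_type': error_name
  })
```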
Sorry, why 40 outcomes?
    NO_ERROR = 0;

38 ErrorTypes from the utask_main output, plus 'N/A' (to indicate no error) and 'UNHANDLED_EXCEPTION', as suggested by titouan.
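For completeness, a quick way to double-check the distinct value count (assumes the generated uworker_msg_pb2 module is importable from the ClusterFuzz source tree; keys() is the standard protobuf enum-wrapper API):

```python
# Quick sanity check of the enum's size; keys() lists the distinct value
# names defined in the proto (NO_ERROR included).
from clusterfuzz._internal.protos import uworker_msg_pb2

print(len(uworker_msg_pb2.ErrorType.keys()))
```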
Fine with me to have a separate metric for error types that does not group by job.

> I think modes is definitely not needed; the two modes won't live side by side much longer.
> I think platform may be unneeded; it can be obtained from the job name.
> Would reducing cardinality by 6X help?

Mode does not contribute much to cardinality, since we have a mostly static mapping from (task, platform) to mode. Platform is actually free since, as you point out, it is implied by job, so there are not 3 copies of each job, one for each platform. IIRC what actually matters is the label combinations we actually emit, not the potential cardinality if we generated all possible labels.
Merging this for the sake of quick iteration; we can revisit it if folks feel it is necessary.
### Motivation
This merges #4489, #4458 and #4483 to the chrome temporary deployment branch. The purpose is to have task error rate metrics, and to log what old testcases are polluting the testcase upload metrics, so we can figure out if a purge is necessary.

Co-authored-by: jonathanmetzman <[email protected]>
It may not be useful for alerting, but I for one find this data interesting. I would not have assumed that most tasks would end in "temporary failure". Maybe that's the breakdown we want here, though? "success", "temporary error"/"retry" and "failure"?
This is too complex IMO; I agree with Jonathan's approach of only considering UNHANDLED_EXCEPTION as a failure. Breaking down by success/retry/failure is too janky, as we would have to map these 38 ErrorType enums to 3 sets, polluting the codebase and making things hard to understand. Also, for every new ErrorType added, this metric would have to be updated, adding further friction to development. I would much rather treat only UNHANDLED_EXCEPTION as an error, since it will not lead to false positives, and simplify things.
I would have thought the opposite. Right now, if I want to know what it means for a regression task to fail with … Ultimately, I would like to know when there are jobs whose tasks are failing too often, so I can go investigate.
We can partition these errors on a best-effort basis into three sets: unambiguous successes, transient conditions that may be retried, and unambiguous failures. These would map onto the TASK_OUTCOME_COUNT 'outcome' label as 'success', 'maybe_retry' and 'failure' (see the sketch below).

Wdyt? This solves the problem: it will at least be possible to filter for the unambiguous failures, and those will be the jobs to drill down into further.
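A minimal sketch of such a bucketing, assuming a helper along these lines (the membership of the retry set is a placeholder, not the real mapping; the helper name is hypothetical):

```python
# Illustrative bucketing of ErrorType values into the three proposed
# outcomes. _MAYBE_RETRY_ERRORS is a placeholder set, not the real mapping.
from clusterfuzz._internal.protos import uworker_msg_pb2

_MAYBE_RETRY_ERRORS = frozenset([
    # Transient, likely-retried conditions would be enumerated here.
])


def outcome_from_error_type(error_type, saw_exception=False):
  """Maps a utask ErrorType (and exception flag) to an 'outcome' label."""
  if saw_exception:
    return 'failure'  # unhandled exceptions are unambiguous failures
  if error_type == uworker_msg_pb2.ErrorType.NO_ERROR:
    return 'success'
  if error_type in _MAYBE_RETRY_ERRORS:
    return 'maybe_retry'
  return 'failure'
```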
Fine with me! @alhijazi might still want to have a say.
Given the 30k label-combination limitation, we will not be able to drill down by ErrorType and job simultaneously: 40 error types * 1300 jobs is already ~52k > 30k, so this is as far as we can go. Restating: drilling down per job and per ErrorType at the same time is an impossible requirement.
…ss, maybe_retry and failure outcomes (#4499)

### Motivation
#4458 implemented a task outcome metric, so we can track error rates in utasks by job/task/subtask. As failures are expected for ClusterFuzz, initially only unhandled exceptions were considered actual errors. Chrome folks asked for a better partitioning of error codes, which is implemented here as the following outcomes:
* success: the task has unequivocally succeeded, producing a sane result.
* maybe_retry: some transient error happened, and the task is potentially being retried. This might capture some unretriable failure condition, but it is a compromise we are willing to make in order to decrease false positives.
* failure: the task has unequivocally failed.

Part of #4271
Fine with me also!
#4516)

### Motivation
#4458 implemented an error rate for utasks, only considering exceptions. In #4499, outcomes were split between success, failure and maybe_retry conditions. There we learned that the volume of retryable outcomes is negligible, so it makes sense to count them as failures.

Listing out all the success conditions under _MetricRecorder is not desirable. However, we are consciously taking on this technical debt so we can deliver #4271. A refactor of uworker main will be performed later, so we can split the success and failure conditions, both of which are currently mixed in uworker_output.ErrorType.

Reference for tech debt acknowledgement: #4517
Motivation
We currently have no metric that tracks the error rate for each task. This PR implements that: the error rate can be obtained by summing the metric with outcome=failure and dividing by the overall sum.
This is useful for SLI alerting.
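As a sketch of that computation (illustrative only; `counts_by_outcome` stands in for whatever the monitoring backend returns when summing TASK_OUTCOME_COUNT per outcome value):

```python
# Sketch of the SLI computation described above: failures divided by the
# total across all outcomes.
def task_error_rate(counts_by_outcome):
  total = sum(counts_by_outcome.values())
  if not total:
    return 0.0
  return counts_by_outcome.get('failure', 0) / total


print(task_error_rate({'success': 950, 'maybe_retry': 10, 'failure': 40}))
# -> 0.04
```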
Part of #4271