
[Monitoring] Adding a metric for task outcome #4458

Merged · 13 commits · Dec 10, 2024
5 changes: 5 additions & 0 deletions src/clusterfuzz/_internal/bot/tasks/utasks/__init__.py
@@ -138,6 +138,11 @@ def __exit__(self, _exc_type, _exc_value, _traceback):
    monitoring_metrics.UTASK_SUBTASK_E2E_DURATION_SECS.add(
        e2e_duration_secs, self._labels)

    outcome = 'error' if _exc_type else 'success'
    monitoring_metrics.TASK_OUTCOME_COUNT.increment({
        **self._labels, 'outcome': outcome
Collaborator:
Since we have a nice enum of possible errors, I'd much rather we be more specific here and record it as the outcome: e.g. result: 'NO_ERROR' or result: 'BUILD_SETUP_FAILED'. It would greatly help in debugging problems.

Exceptions can be bucketed as their own result: 'UNHANDLED_EXCEPTION'.

Do you think it will pose a problem with metric cardinality?

Collaborator (author):
Hmm, now that I think of it, we should get rid of job.

For OSS-Fuzz:

15 tasks
1300 projects ~ 1300 jobs
3 subtasks (pre main post)
2 modes (batch/queue)
3 platforms (linux mac win)
2 outcomes (fail/success)

This comes out to around 700k label combinations, which will bite us. We could swap job out for the discriminated outcomes, and that would be around ~16k possible label combinations, which should be fine. Wdyt?
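The back-of-the-envelope arithmetic above can be sketched quickly (a hypothetical check, not part of the PR; the dimension counts are the approximate figures from this thread):

```python
from math import prod

# Current label dimensions for OSS-Fuzz (approximate counts from the discussion).
current = {'task': 15, 'job': 1300, 'subtask': 3, 'mode': 2, 'platform': 3, 'outcome': 2}
print(prod(current.values()))  # 702000 -- far past GCP's ~30k recommendation

# Dropping 'job' and discriminating all ~40 outcomes instead.
proposed = {'task': 15, 'subtask': 3, 'mode': 2, 'platform': 3, 'outcome': 40}
print(prod(proposed.values()))  # 10800 -- in the ballpark of the ~16k estimate, well under 30k
```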

Collaborator (author):
@jonathanmetzman the current state of the subtask duration metric allows for around 350k different label combinations, GCP's recommendation is at most 30k. Should we get rid of job in this context manager as a whole?

Collaborator:
What's the limit? I think there are other things that can be removed:

1. I think modes is definitely not needed, the two modes won't live side by side much longer.
2. I think platform may be unneeded, it can be obtained through the job name.

Would reducing cardinality by 6X help?

Collaborator:
Also, there's currently about 1000 platforms in oss-fuzz but that is going away. Will that help?

Collaborator (author), @vitorguidi, Dec 9, 2024:
It is actually 40 distinct values under that enum.
15 tasks * 40 outcomes * 1300 jobs * 3 subtasks > 1M, so there is no chance at all of getting the discriminated outcomes while keeping jobs.

Another option is to spin up a separate metric, TASK_OUTCOME_BY_ERROR_TYPE, for which we track all the labels except job, and manage to stay under 30k distinct label combinations.

Wdyt @letitz

As far as converting the proto value to a name goes, it can be done like this:

```python
>>> uworker_msg_pb2.ErrorType.Name(uworker_msg_pb2.ErrorType.NO_ERROR)
'NO_ERROR'
```

Collaborator:
Sorry, why 40 outcomes?

Collaborator (author):
38 ErrorTypes from the utask_main output, plus 'N/A' (to indicate no error) and 'UNHANDLED_EXCEPTION', as suggested by titouan.
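One way the ~40 outcome values could be derived (a hypothetical sketch, not ClusterFuzz code; the error-type names are assumed to mirror the `uworker_msg_pb2.ErrorType` enum):

```python
def outcome_label(error_type_name, unhandled_exception):
  """Maps a subtask result to a value for the 'outcome' metric label."""
  if unhandled_exception:
    # Exceptions are bucketed as their own outcome, per the suggestion above.
    return 'UNHANDLED_EXCEPTION'
  if error_type_name is None or error_type_name == 'NO_ERROR':
    return 'N/A'  # no error reported by utask_main
  # One of the ~38 ErrorType names, e.g. 'BUILD_SETUP_FAILED'.
  return error_type_name

print(outcome_label(None, False))                  # N/A
print(outcome_label('BUILD_SETUP_FAILED', False))  # BUILD_SETUP_FAILED
print(outcome_label('NO_ERROR', True))             # UNHANDLED_EXCEPTION
```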

Collaborator:
Fine with me to have a separate metric for error types that does not group by job.

> I think modes is definitely not needed, the two modes won't live side by side much longer.
> I think platform may be unneeded, it can be obtained through the job name.
> Would reducing cardinality by 6X help?

Mode does not contribute much to cardinality since we have a mostly static mapping from (task, platform) to mode. Platform is actually free since as you point out, it is implied by job. So there are not 3 copies of each job, one for each platform. IIRC this is what actually matters, not the potential cardinality if we generated all possible labels.
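That last point can be illustrated with a hypothetical sketch (job names and the platform-derivation rule are made up for the example): the effective cardinality is the number of label tuples actually written, not the full cross product.

```python
# Each write records one concrete label tuple. Because platform is implied
# by job, a given job only ever appears with one platform value, so the
# platform dimension does not multiply the observed series count.
observed = set()

def record(task, job, subtask, outcome):
  platform = 'linux' if 'linux' in job else 'windows'  # derived from job
  observed.add((task, job, subtask, platform, outcome))

record('fuzz', 'libfuzzer_asan_linux_foo', 'main', 'success')
record('fuzz', 'libfuzzer_asan_linux_foo', 'main', 'success')  # duplicate: no new series
record('fuzz', 'libfuzzer_asan_linux_foo', 'main', 'error')
print(len(observed))  # 2 distinct time series, not a full cross product
```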

    })


def ensure_uworker_env_type_safety(uworker_env):
"""Converts all values in |uworker_env| to str types.
13 changes: 13 additions & 0 deletions src/clusterfuzz/_internal/metrics/monitoring_metrics.py
@@ -241,6 +241,7 @@
        monitor.StringField('job'),
    ],
)

TASK_RATE_LIMIT_COUNT = monitor.CounterMetric(
    'task/rate_limit',
    description=('Counter for rate limit events.'),
@@ -250,6 +251,18 @@
        monitor.StringField('argument'),
    ])

TASK_OUTCOME_COUNT = monitor.CounterMetric(
    'task/outcome',
    description=('Counter metric for task outcome (success/failure).'),
    field_spec=[
        monitor.StringField('task'),
        monitor.StringField('job'),
        monitor.StringField('subtask'),
        monitor.StringField('mode'),
        monitor.StringField('platform'),
        monitor.StringField('outcome'),
    ])

UTASK_SUBTASK_E2E_DURATION_SECS = monitor.CumulativeDistributionMetric(
    'utask/subtask_e2e_duration_secs',
    description=(