Report failures of periodic jobs to the cluster-api Slack channel #10520
This issue is currently awaiting triage. CAPI contributors will take a look as soon as possible and apply the appropriate triage label.
Oh wow, yeah, that would be a great thing. I just fear that it may pollute the channel too much. But we could try it and fail fast: if it turns out to be too much, we can ask for feedback later in the community meeting or via a Slack thread/poll.
Do we know if this respects …
I'm not sure if it respects that. We could try and roll back if it doesn't?
If it still pollutes the channel too much we can reconsider (I'm currently guessing that we would get one Slack message for every mail we get today, but I don't know).
One Slack message per mail would be perfect; more would disrupt the channel. WDYT about enabling it for CAPV first?
Also fine with making the change and rolling back if it doesn't work.
For CAPO we get a Slack message for every failure and an email only after 2 failures in a row. I think it has been tolerable for us, but that indicates the Slack reporter does not check the same failure threshold.
Hm okay, every failure is just too much. So we should probably take a closer look at the configuration / implementation. One message for every failure just doesn't make sense for the number of tests/failures we have (the signal/noise ratio is just wrong).
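For context (I haven't verified the exact CAPO setup, so treat this as a sketch): the per-failure Slack messages come from Prow's crier Slack reporter, which posts on every run that ends in one of the configured states and has no consecutive-failure threshold, while the emails come from TestGrid alerting, which does have such a threshold via job annotations. Roughly:

```yaml
# Illustrative TestGrid alerting annotations on a periodic job;
# the dashboard name and email address below are placeholders.
annotations:
  testgrid-dashboards: sig-cluster-lifecycle-cluster-api
  testgrid-alert-email: some-alerts-alias@example.com
  testgrid-num-failures-to-alert: "2"  # email only after 2 consecutive failures
```

That would explain why Slack is noisier than mail today.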
+1 to testing this if we find a config that is reasonably noisy (but not too noisy).
/priority backlog
+1 from my side too. Tagging CI lead @Sunnatillo.
Sounds great. I will take a look.
I guess this …
/assign @Sunnatillo
@Sunnatillo
Thank you for the update. I will open the issue in test-infra and try to find a way to do it.
I opened an issue regarding this in test-infra:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Maybe let's close this here until kubernetes-sigs/prow#195 has been implemented? (which might take a very long time if nobody volunteers for it)
As per the comment above:
/close
@fabriziopandini: Closing this issue in response to the comment above.
I noticed that CAPO is reporting periodic test failures to Slack, e.g.: https://kubernetes.slack.com/archives/CFKJB65G9/p1713540048571589
I think this is a great way to surface issues with CI (and folks can also directly start a thread based on a Slack comment like this).
This could be configured roughly like this: https://github.com/kubernetes/test-infra/blob/5d7e1db75dce28537ba5f17476882869d1b94b0a/config/jobs/kubernetes-sigs/cluster-api-provider-openstack/cluster-api-provider-openstack-periodics.yaml#L48-L55
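For reference, a minimal sketch of what the equivalent block for a CAPI periodic could look like (the job name, channel, and template below are illustrative, not the exact values from the linked CAPO config):

```yaml
periodics:
- name: periodic-cluster-api-e2e-main  # illustrative job name
  interval: 24h
  # ... spec, decoration, etc. omitted ...
  reporter_config:
    slack:
      channel: cluster-api  # the #cluster-api Slack channel
      job_states_to_report:
      - failure
      - error
      report_template: 'Periodic job *{{.Spec.Job}}* ended with state *{{.Status.State}}*. <{{.Status.URL}}|View logs>'
```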
What do you think?