OTA-1378,OTA-1379: add retry logic for pulling images and less logs for sigs #969
Conversation
cincinnati/src/plugins/internal/graph_builder/release_scrape_dockerv2/registry/mod.rs
Force-pushed from 4d9eb3f to 0368585
@PratikMahajan: This pull request references OTA-1379 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
    &repo,
    &tag,
    e
if tag.contains(".sig") {
Tag-based discovery is one option for finding Sigstore signatures. In the future, we might move to listing referrers (containers/image#2030). But if we do, the failure modes for this line:
- Misclassifying a non-sig as a Sigstore signature because it happens to use this tag structure, or
- Misclassifying a Sigstore signature as a non-sig, non-release ignored image,
both seem low, so 🤷, I'm ok with this heuristic.
Misclassifying a non-sig as a sig will always be a risk if we do string comparison, but imo it should be rare.
If a signature gets classified as a non-sig, the logs should bring that to our notice. Not a lot worried about the mismatch.
We can also change this logic when we get listing referrers in dkregistry and pull it downstream to cincinnati.
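For reference, the heuristic under discussion boils down to a one-line string check. A minimal sketch follows; the helper name is illustrative (the PR inlines the check at the call site), and the sha256-<digest>.sig pattern is the cosign tag convention rather than anything this PR enforces:

/// Illustrative helper only; the PR inlines `tag.contains(".sig")` at the call site.
/// Cosign-style Sigstore signature tags conventionally look like "sha256-<digest>.sig".
fn looks_like_sigstore_tag(tag: &str) -> bool {
    // A stricter variant would be `tag.ends_with(".sig")`; either way this is a
    // string heuristic, with the (rare) misclassification risks discussed above.
    tag.contains(".sig")
}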
move the encountered-signatures log from warn to debug, count the number of signatures as well as invalid releases, and log the counts instead.
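A rough sketch of that logging change, assuming the log crate's debug! macro; the function, the counter names, and the is_release_tag stand-in are illustrative, not the PR's actual code:

use log::debug;

/// Illustrative only: count skipped tags and emit one debug line, instead of a
/// warn! per encountered signature. `is_release_tag` stands in for whatever
/// validity check the scraper actually applies.
fn partition_tags<'a>(tags: &'a [String], is_release_tag: impl Fn(&str) -> bool) -> Vec<&'a String> {
    let mut ignored_signatures = 0u64;
    let mut invalid_releases = 0u64;
    let mut releases = Vec::new();
    for tag in tags {
        if tag.contains(".sig") {
            ignored_signatures += 1; // previously one warn! per signature tag
        } else if !is_release_tag(tag) {
            invalid_releases += 1;
        } else {
            releases.push(tag);
        }
    }
    debug!(
        "ignored {} signature tags and {} invalid release tags",
        ignored_signatures, invalid_releases
    );
    releases
}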
adds retry logic so we're more resilient to failures on the container registry's part. We try fetching the manifest and manifest ref three times before ultimately failing.
retries fetching the blob instead of erroring out and erasing the progress made up to that point.
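The retry shape described in these two commits is roughly the following; this is a minimal sketch assuming the tokio and log crates, not the PR's actual wrapper around the dkregistry manifest/blob calls:

use std::time::Duration;

/// Illustrative only: run an async operation up to three times, sleeping briefly
/// between attempts, and return the last error if every attempt fails.
async fn with_retry<T, E, Fut, F>(mut op: F) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
    E: std::fmt::Display,
{
    const MAX_ATTEMPTS: u32 = 3;
    let mut last_err = None;
    for attempt in 1..=MAX_ATTEMPTS {
        match op().await {
            Ok(value) => return Ok(value),
            Err(e) => {
                log::debug!("attempt {}/{} failed: {}", attempt, MAX_ATTEMPTS, e);
                last_err = Some(e);
                if attempt < MAX_ATTEMPTS {
                    tokio::time::sleep(Duration::from_secs(1)).await;
                }
            }
        }
    }
    Err(last_err.expect("at least one attempt ran"))
}

A caller would wrap the manifest or blob fetch in something like this, bailing out early for .sig tags (as the diff below does) so signatures are never retried.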
Force-pushed from 0368585 to 31ceb1d
Err(e) => {
    // signatures are not identified by dkregistry and are not useful for the
    // cincinnati graph; don't retry, just return the error
    if tag.contains(".sig") {
        return Err(e);
and then this error bubbles up and is converted to a debug message via the .sig branch of fetch_releases's get_manifest_layers handling in 022a8d6.
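Roughly, the caller-side handling being described looks like the fragment below; this is illustrative only, not the actual code from 022a8d6, and the surrounding variable names are assumptions:

match get_manifest_layers(&repo, &tag).await {
    Ok(layers) => {
        // ... build the release metadata from the layers ...
    }
    // the early return for signature tags ends up here, demoted to a
    // debug message instead of a per-tag warning
    Err(e) if tag.contains(".sig") => {
        log::debug!("skipping signature tag {}:{}: {}", repo, tag, e);
    }
    Err(e) => return Err(e),
}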
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: PratikMahajan, wking. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@PratikMahajan: This pull request references OTA-1378 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set. This pull request references OTA-1379 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/override ci/prow/customrust-cargo-test override known test failures
@PratikMahajan: Overrode contexts on behalf of PratikMahajan: ci/prow/cargo-test, ci/prow/customrust-cargo-test In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@PratikMahajan: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
This PR is only used to test openshift#969. Please ignore it; I will close it later.
…X_REPLICAS

We'd dropped 'replicas' in 8289781 (replace HPA with keda ScaledObject, 2024-10-09, openshift#953), following AppSRE advice [1]. Rolling that Template change out caused the Deployment to drop briefly to replicas:1 before Keda raised it back up to MIN_REPLICAS (as predicted [1]). But in our haste to recover from the incident, we raised both MIN_REPLICAS (good) and restored the replicas line in 0bbb1b8 (bring back the replica field and set it to min-replicas, 2024-10-24, openshift#967). That means we will need some future Template change to revert 0bbb1b8 and re-drop 'replicas'. In the meantime, every Template application will cause the Deployment to blip to the Template-declared value briefly, before Keda resets it to the value it prefers.

Before this commit, the blip value is MIN_REPLICAS, which can lead to rollouts like:

  $ oc -n cincinnati-production get -w -o wide deployment cincinnati
  NAME         READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                                           IMAGES                                                                SELECTOR
  ...
  cincinnati   0/6     6            0           86s     cincinnati-graph-builder,cincinnati-policy-engine    quay.io/app-sre/cincinnati:latest,quay.io/app-sre/cincinnati:latest   app=cincinnati
  cincinnati   0/2     6            0           2m17s   cincinnati-graph-builder,cincinnati-policy-engine    quay.io/app-sre/cincinnati:latest,quay.io/app-sre/cincinnati:latest   app=cincinnati
  ...

when Keda wants 6 replicas and we push:

  $ oc process -p MIN_REPLICAS=2 -p MAX_REPLICAS=12 -f dist/openshift/cincinnati-deployment.yaml | oc -n cincinnati-production apply -f -
  deployment.apps/cincinnati configured
  prometheusrule.monitoring.coreos.com/cincinnati-recording-rule unchanged
  service/cincinnati-graph-builder unchanged
  ...

The Pod terminations on the blip to MIN_REPLICAS will drop our capacity to serve clients, and at the moment it can take some time to recover that capacity in replacement Pods. Changes like 31ceb1d (add retry logic to fetching blob from container registry, 2024-10-24, openshift#969) should speed new-Pod availability and reduce that risk.

This commit moves the blip over to MAX_REPLICAS to avoid Pod-termination risk entirely. Instead, we'll surge unnecessary Pods, and potentially autoscale unnecessary Machines to host those Pods. But then Keda will return us to its preferred value, and we'll delete the still-coming-up Pods and scale down any extra Machines. Spending a bit of money on extra cloud Machines for each Template application seems like a lower risk than the Pod-termination risk, to get us through safely until we are prepared to remove 'replicas' again and eat its one-time replicas:1, Pod-termination blip.

[1]: https://gitlab.cee.redhat.com/service/app-interface/-/blob/649aa9b681acf076a39eb4eecf0f88ff1cacbdcd/docs/app-sre/runbook/custom-metrics-autoscaler.md#L252 (internal link, sorry external folks)
We're adding retry logic that tries to pull layers again if a pull fails for any reason. We retry three times before ultimately failing.
Also moved the warn log for sig pulling to debug and added a counter to tell how many sig images we've ignored.