Instability of artifact-caching-proxy #4442
Comments
Starting analysis of the logs on the ACP side.
For each of the failing requests found in the past 15 days (including each one you folks logged), ACP reported an error caused by the upstream, in the following categories:
We also had 1 occurrence.
=> The errors are definitely not due to an ACP problem: by design, it "reports" the upstream error as-is.
=> We could check whether we can "retry" the upstream in case of error; I need to recall which cases could be caught.
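As a rough sketch of the retry idea, and assuming the proxy is nginx-based (an assumption on my part, not something confirmed in this thread), nginx can re-attempt the upstream on transient failures before surfacing the error to the Maven client. The upstream name below is illustrative:

```nginx
# Hypothetical sketch: retry the upstream on transient errors instead of
# forwarding them straight to the Maven client. Directive names are standard
# nginx; "upstream_repo" is illustrative.
location / {
    proxy_pass http://upstream_repo;
    # Retry the next upstream attempt on network errors, timeouts,
    # and selected 5xx responses from the origin.
    proxy_next_upstream error timeout http_502 http_503 http_504;
    proxy_next_upstream_tries 3;
    proxy_next_upstream_timeout 30s;
}
```

One caveat: `proxy_next_upstream` moves on to the *next* server in the upstream group, so with a single origin the server may need to be listed more than once in the `upstream` block for a retry to actually happen.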
@MarkEWaite opened a PR based on a discussion we had during the previous infra meeting: jenkinsci/bom#4095. The goal is to "pre-heat" the cache to decrease the probability of facing these issues.
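As a rough illustration of the pre-heat idea (not the actual implementation in jenkinsci/bom#4095; the helper below and its behaviour are my own sketch), a warming pass simply requests each artifact once through the proxy so that later builds hit a warm cache instead of racing a possibly flaky cold upstream fetch:

```python
from urllib.request import urlopen

# Proxy endpoint as seen in the Maven error reported in this issue.
PROXY = "http://artifact-caching-proxy.artifact-caching-proxy.svc.cluster.local:8080"

def warm_cache(paths, fetch=False):
    """Build the proxy URL for each repository path and optionally fetch it.

    A cold cache means the first request races the upstream download;
    warming ahead of the release run moves that risk out of the build.
    """
    urls = [f"{PROXY}/{path}" for path in paths]
    if fetch:
        for url in urls:
            urlopen(url).read()  # discard the body; we only want it cached
    return urls

# Example: the artifact that failed in the report above.
print(warm_cache(["com/google/crypto/tink/tink/1.10.0/tink-1.10.0.jar"]))
```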
Service(s)
Artifact-caching-proxy
Summary
Bruno had to run the weekly BOM release process five times today (2024-12-06) because of errors like the following:
```
Could not transfer artifact com.google.crypto.tink:tink:jar:1.10.0 from/to azure-aks-internal (http://artifact-caching-proxy.artifact-caching-proxy.svc.cluster.local:8080/): Premature end of Content-Length delimited message body (expected: 2,322,048; received: 1,572,251)
```
Here's the issue where he tracked the build numbers so you can see the specific failures:
jenkinsci/bom#4066
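For what it's worth, the byte counts in that error are easy to pull out when scanning build logs for these failures; a small sketch (the regex and function name are my own, not from any Jenkins tooling):

```python
import re

def parse_premature_end(msg):
    """Extract (expected, received, missing) byte counts from a Maven
    'Premature end of Content-Length delimited message body' error."""
    m = re.search(r"expected: ([\d,]+); received: ([\d,]+)", msg)
    if m is None:
        return None
    expected, received = (int(g.replace(",", "")) for g in m.groups())
    return expected, received, expected - received

msg = ("Premature end of Content-Length delimited message body "
       "(expected: 2,322,048; received: 1,572,251)")
print(parse_premature_end(msg))  # (2322048, 1572251, 749797)
```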
I also had similar issues running the BOM weekly-test against a core RC that I'm working on. Since I started working on BOM a couple of months ago, this problem seems to be getting worse/more frequent as the weeks progress.
Reproduction steps
Unfortunately, it is not reproducible on demand.