Seed job fails after Jenkins restart with backups enabled #607
Comments
Doesn't seem totally consistent, probably some race condition somewhere.
We also noticed a similar issue. The seed job starts before the restore job finishes, which causes Jenkins to try to re-index repos it doesn't need to. If you go to Manage Jenkins and click the "Reload Configuration from Disk" button, it fixes the error @Bakies posted, but a better solution in my opinion would be for the seed job to wait until the restore process has finished before triggering.
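The "wait until restore finishes" idea could be sketched in shell. This is only an illustration: the marker file, and the assumption that the restore container would write one on completion, are hypothetical; the operator does not provide such a mechanism today.

```shell
#!/bin/sh
# Illustration only: gate seed-job triggering on restore completion.
# "$1" is a marker file a restore process would write when finished;
# this marker is hypothetical -- the operator does not provide it.
wait_for_restore() {
  marker=$1
  timeout=${2:-300}   # give up after this many seconds
  elapsed=0
  while [ ! -f "$marker" ]; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "restore did not finish within ${timeout}s" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "restore finished, safe to trigger seed job"
}
```

Something like this could run as an init step before the seed job is allowed to fire.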
Ah, thanks for that workaround, that button should be helpful to me :)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this issue is still affecting you, just comment with any updates and we'll keep it open. Thank you for your contributions.
I encountered this recently as well. I have solved it for now by slightly changing the backup.sh script. I added:
Hi! I was wondering if there is an update on this. I'm facing the same issue: when my Jenkins pod dies, the new pod won't trigger the seed jobs and fails with that error.
Do we have any updates on this issue? We are also facing similar issues on our end.
Anti-stale bot comment
I can confirm that we've got the same problem:
Reloading the configuration from disk via the appropriate option in the Jenkins settings solves the problem. But that's something that shouldn't have to happen after restarting the jenkins-master pod.
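For anyone applying this workaround repeatedly: the "Reload Configuration from Disk" action can also be triggered without the UI by POSTing to Jenkins' `/reload` endpoint. A minimal sketch; the URL, user, and API token here are placeholders, not values from this issue:

```shell
#!/bin/sh
# Sketch: trigger "Reload Configuration from Disk" non-interactively.
# The /reload endpoint is the HTTP counterpart of the UI button;
# url, user, and token are placeholders to fill in.
reload_jenkins() {
  url=$1 user=$2 token=$3
  curl -fsS --max-time 10 -X POST -u "$user:$token" "$url/reload"
}
```

This is only a stopgap, same as clicking the button; it does not remove the underlying race.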
This ^ should be the solution here: excluding the seed jobs (with a regex) from the history backup. Adding good-first-issue.
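A hedged sketch of what such an exclusion could look like in a backup script. The `$JENKINS_HOME/jobs/<name>` layout is standard Jenkins, but the function name, archive path, and the `*seed*` pattern are assumptions to adapt to your actual seed-job names; this is not the operator's real backup.sh.

```shell
#!/bin/sh
# Sketch: back up job configs and history, but skip directories whose
# names match a seed-job pattern ("*seed*" is a placeholder pattern).
backup_jobs() {
  jenkins_home=$1
  archive=$2
  # tar --exclude drops matching job directories entirely, so seed jobs
  # (including their nextBuildNumber files) never enter the backup and
  # can't clash with the JCasC-created job after a restore.
  tar -czf "$archive" --exclude='jobs/*seed*' -C "$jenkins_home" jobs
}
```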
Describe the bug
After my Jenkins controller restarts, the seed job fails to start. I think there's a race condition between the restore job and the seed job starting. Because the seed job is configured in JCasC, it is probably set up before the restore runs and creates the nextBuildNumber file with a 1, which the restore may not override. It takes a long time, if ever, before the seed job runs and restores the config for the rest of the jobs.
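If the race really does leave a stale nextBuildNumber behind, a repair along these lines is conceivable: recompute it from the restored builds directory, whose subdirectories are numbered by build. This is a hypothetical sketch, not part of the operator:

```shell
#!/bin/sh
# Hypothetical repair: set a job's nextBuildNumber to one past the
# highest restored build number (Jenkins build dirs are numeric).
fix_next_build_number() {
  job_dir=$1
  last=$(ls "$job_dir/builds" 2>/dev/null | grep -E '^[0-9]+$' | sort -n | tail -1)
  echo $(( ${last:-0} + 1 )) > "$job_dir/nextBuildNumber"
}
```

Run against a job directory after the restore completes (and before Jenkins loads it), this would make the counter consistent with the restored history; with no builds at all it writes 1, matching a fresh job.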
I'm currently thinking I will just exclude the seed jobs from backups. I don't think I particularly care about their history.
To Reproduce
Configure a seed job in JCasC
Run it a few times
Delete jenkins pod
Additional information
Kubernetes version: 1.19
Jenkins Operator version: v0.5.0
jenkins-master container logs: