Increase the time for Galaxy cleanup again #1195
Conversation
I assume it takes even longer than 1h.
We need to increase the timeout ... ouch.
Yes, we can migrate this to the maintenance node and, if necessary, update the bashrc to export a ... This also means we must configure logrotate to clean up those log files on the maintenance node. I will get back to this once the maintenance node is back online.
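A minimal logrotate sketch for the setup described above. The log path, rotation schedule, and retention count are all assumptions for illustration, not taken from this thread:

```
# /etc/logrotate.d/galaxy-cleanup  -- hypothetical file; path and values are assumed
/var/log/galaxy-cleanup.log {
    weekly          # rotation cadence is a guess; tune to the cleanup interval
    rotate 4        # keep four rotated logs
    compress
    missingok       # don't error if the cleanup hasn't run yet
    notifempty
}
```

The real file name and rotation policy would depend on where the cleanup script actually writes its output on the maintenance node.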
I would be open to it logging to stderr / journald, but this is how the script in Galaxy works; PRs welcome!
Please merge if you think it's fine. I'm still debugging our posters problems.
Cool, I am trying to catch up on things. Also, I think this is fine. We can test it, and if we find any side effects, then we can reduce the interval again like once a day or so. |
I just found out that the task was deployed via this, so the interval must be updated in the group vars instead. I will create a PR reflecting yours shortly.
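For context, updating the interval in group vars would look roughly like the sketch below. The variable names are hypothetical; the real ones depend on the Ansible role that deploys the cleanup cron task:

```yaml
# group_vars sketch -- variable names are hypothetical, not from this repo
galaxy_cleanup_minute: "0"
galaxy_cleanup_hour: "4"
galaxy_cleanup_day: "*/2"   # e.g. run once every 2 days instead of hourly
```

Whatever role renders the cron entry would then pick these up on the next deploy, so the change lives in configuration rather than in the playbook itself.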
I don't know why we decreased it 5 years ago.
If we purge datasets older than 60 days, I think we can run it less frequently and maybe avoid the long-running, possibly overlapping transactions.
A counterargument would be that running it only once every 2 days means more datasets must be deleted at once, producing larger IO spikes? I don't know.
@sanjaysrikakulam this job should run on the maintenance node.