No "pick up where you left off" option for failed downloads #27
Comments
Hey Jade, in the meantime, here's a workaround you can use. `giant-squid list --json $query` will give you a bunch of metadata about the jobs matching the query, e.g.:

```json
{
  "801409": {
    "obsid": 1413666792,
    "jobId": 801409,
    "jobType": "DownloadVisibilities",
    "jobState": "Ready",
    "files": [
      {
        "jobType": "Acacia",
        "fileUrl": "https://projects.pawsey.org.au/mwa-asvo/1413666792_801409_vis.tar?AWSAccessKeyId=...",
        "filePath": null,
        "fileSize": 152505477120,
        "fileHash": "d6dfb7391a495b0eb07cc885808e9e8058e90ec3"
      }
    ]
  }
}
```

You can chuck the `fileUrl` straight into `wget`. If you want to automate this for many jobs, you can use:

```bash
giant-squid list -j --states=ready -- $obslist \
  | jq -r '.[]|[.jobId,.files[0].fileUrl//"",.files[0].fileSize//"",.files[0].fileHash//""]|@tsv' \
  | while read -r jobid url size hash; do
      # skip jobs whose tarball is already on disk
      [ -f "${jobid}.tar" ] && continue
      wget "$url" -O "${jobid}.tar" --progress=dot:giga --wait=60 --random-wait
    done
```
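Since the listing already reports `fileSize` and `fileHash`, you could also sanity-check each finished tarball before untarring. A minimal sketch (an editorial addition, not from the thread), assuming the 40-hex-character `fileHash` is a SHA-1 digest and that `$jobid`, `$size`, and `$hash` come from the loop above:

```bash
# Compare the local tarball against the size and hash reported by giant-squid.
actual_size=$(stat -c%s "${jobid}.tar")               # GNU stat; use `stat -f%z` on macOS
actual_hash=$(sha1sum "${jobid}.tar" | cut -d' ' -f1) # assumes fileHash is SHA-1
if [ "$actual_size" = "$size" ] && [ "$actual_hash" = "$hash" ]; then
  echo "${jobid}.tar OK"
else
  echo "${jobid}.tar is incomplete or corrupt; re-download it" >&2
fi
```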
Hi Jade, as Dev says, we currently don't have a continue-from-where-you-left-off feature as such, but it would be extremely valuable, especially for large downloads, so it will definitely be on our roadmap for a future release. In the meantime, I think Dev has used the above technique successfully, so please give that a go and let us know how it goes!
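Until such a feature exists, one stopgap worth noting (an editorial suggestion, not something the tool provides): `wget`'s standard `-c`/`--continue` flag resumes a partial download from where it stopped rather than restarting from byte zero. One caveat is that presigned URLs like the Acacia `fileUrl` above typically expire, so a fresh URL from `giant-squid list` may be needed before retrying:

```bash
# Resume a partially-downloaded tarball instead of restarting from scratch.
# --continue sends a Range request for only the missing tail of the file;
# --tries retries transient network failures automatically.
wget --continue --tries=5 --progress=dot:giga "$url" -O "${jobid}.tar"
```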
oh and @baron-de-montblanc @d3v-null - FYI you can also pass to