Report debugging information to mitigate memory overrun #235
@fedorov I've created #236 to address this issue partially. We need to discuss this question further because I've encountered a few challenges during my attempts to deliver an adequate solution:
Thanks for the investigation @pedrokohler. If we can get it to work, I think option 2, streaming, is the best. Regarding point 3, can you try doing a HEAD request directly against a Google DICOM store without going through the proxy?
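For reference, a minimal sketch of what such a direct HEAD request might look like, assuming a Google Healthcare API DICOMweb bulkdata URL (the path components, the function name, and the token handling are placeholders, not the project's actual configuration):

```typescript
// Sketch: probe a DICOMweb bulkdata endpoint with a HEAD request to see
// whether the store reports a Content-Length. The URL path components and
// the access token are placeholders.
const bulkdataUrl =
  'https://healthcare.googleapis.com/v1/projects/PROJECT/locations/LOCATION/' +
  'datasets/DATASET/dicomStores/STORE/dicomWeb/studies/STUDY_UID/' +
  'series/SERIES_UID/instances/INSTANCE_UID/bulkdata/BULKDATA_UID'

async function probeContentLength (accessToken: string): Promise<void> {
  const response = await fetch(bulkdataUrl, {
    method: 'HEAD',
    headers: { Authorization: `Bearer ${accessToken}` }
  })
  // If the server supports it, Content-Length would tell us the payload size
  // before committing to download the whole annotation object.
  console.log('status:', response.status)
  console.log('Content-Length:', response.headers.get('Content-Length'))
}
```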
@pieper I've just tried making requests directly against the Google DICOM store with no luck, take a look: Still looking at the stream thing, but it looks promising.
Interesting, thanks for checking this @pedrokohler. It looks like you checked a QIDO studies endpoint. I wonder if a WADO request for ANN data would behave the same. Just thinking that QIDO could be streaming results from a database while WADO would be serving basically static content.
@pieper if you take a close look, my second request in the screenshot above is a WADO request.
Oh, okay. Then I guess it's a limitation of the DICOMweb implementation. I'm not sure if there's anything in the standard about whether this should be supported or not. Maybe it's something that Google would agree has utility. Did you happen to try range requests? Maybe that would work even if you don't know the total size.
@pieper what do you mean by range requests? |
Instead of requesting the entire binary, you can retrieve it in chunks using the HTTP Range header.
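A rough sketch of chunked retrieval with Range headers, assuming the server honors them (which is exactly what was being tested here); the URL, token, and function name are placeholders:

```typescript
// Sketch: retrieve a bulkdata object in fixed-size chunks via HTTP Range
// headers instead of one monolithic request.
async function fetchInChunks (
  url: string,
  accessToken: string,
  chunkSize = 1024 * 1024
): Promise<Uint8Array[]> {
  const chunks: Uint8Array[] = []
  let offset = 0
  while (true) {
    const response = await fetch(url, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        Range: `bytes=${offset}-${offset + chunkSize - 1}`
      }
    })
    // 206 Partial Content means the Range header was honored; a plain 200
    // means the server ignored it and returned the full body anyway.
    if (response.status !== 206) {
      console.warn('Range not honored, status:', response.status)
      chunks.push(new Uint8Array(await response.arrayBuffer()))
      break
    }
    const chunk = new Uint8Array(await response.arrayBuffer())
    chunks.push(chunk)
    offset += chunk.byteLength
    // Content-Range looks like "bytes 0-1048575/5242880"; stop at the end.
    const total = Number(response.headers.get('Content-Range')?.split('/')[1])
    if (!Number.isNaN(total) && offset >= total) break
    if (chunk.byteLength < chunkSize) break
  }
  return chunks
}
```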
@pedrokohler to help with debugging this in an alternative manner, can you please log to the JS console the number of annotations available in this attribute: https://dicom.innolitics.com/ciods/microscopy-bulk-simple-annotations/microscopy-bulk-simple-annotations/006a0002/006a000c? This is in parallel to what you are trying to do with the bulk data.
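A sketch of that logging, assuming the ANN instance metadata has already been retrieved in DICOM JSON format (the function name and the way the metadata is obtained are illustrative):

```typescript
// Sketch: log NumberOfAnnotations (006A,000C) for each item of the
// Annotation Group Sequence (006A,0002), assuming the ANN instance metadata
// is available as a DICOM JSON dataset.
interface DicomJsonElement {
  vr: string
  Value?: any[]
}
type DicomJsonDataset = Record<string, DicomJsonElement>

function logNumberOfAnnotations (metadata: DicomJsonDataset): void {
  const groups = metadata['006A0002']?.Value as DicomJsonDataset[] | undefined
  if (groups === undefined) {
    console.log('No Annotation Group Sequence (006A,0002) found')
    return
  }
  groups.forEach((item, index) => {
    const n = item['006A000C']?.Value?.[0]
    console.log(`Annotation group ${index}: NumberOfAnnotations =`, n)
  })
}
```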
@pieper the Range header seems to be completely ignored by the Google DICOM store.
Thanks @pedrokohler - that makes sense but it was worth trying. Another thing that might help: we should be able to get progress events from the dicomweb-client to give feedback to the user and potentially abort if we think there's too much data. I took a look but I don't think Slim is currently reporting progress for dicomweb-client requests. Do you think that would help? This might help in addition to using the NumberOfAnnotations value to estimate the size of the bulk data.
@pieper unfortunately, the Google DICOM store does not send back the Content-Length header when we make the bulk data request. Because of this, the total attribute of the progress event is always 0 and progress is not computable. See: However, I think it's still possible to add a button to allow the user to abort the request. We could also show the number of bytes downloaded so far in this interface (the
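A sketch of what that feedback could look like with a plain XMLHttpRequest (the actual wiring would have to go through the viewer's dicomweb-client layer, which is not shown here; URL, token handling, and function name are placeholders):

```typescript
// Sketch: report download feedback even without a Content-Length header.
// Because event.total is 0 for these responses, we show the running byte
// count instead of a percentage. A raw XMLHttpRequest is used only for
// illustration; Slim's requests actually go through dicomweb-client.
function downloadWithFeedback (url: string, accessToken: string): XMLHttpRequest {
  const xhr = new XMLHttpRequest()
  xhr.open('GET', url)
  xhr.responseType = 'arraybuffer'
  xhr.setRequestHeader('Authorization', `Bearer ${accessToken}`)
  xhr.onprogress = (event: ProgressEvent) => {
    if (event.lengthComputable) {
      console.log(`progress: ${event.loaded}/${event.total} bytes`)
    } else {
      // No total available, so only the bytes received so far can be shown.
      console.log(`downloaded so far: ${event.loaded} bytes`)
    }
  }
  xhr.send()
  // Returning the request lets a UI "Abort" button call xhr.abort().
  return xhr
}
```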
The idea is to abort automatically if the data exceeds a given threshold, e.g. NumberOfAnnotations > X.
Can't we also watch the "loaded" parameter in the progress event and abort if it starts getting too large? |
yes, we can do that |
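A minimal sketch of that automatic abort, assuming a raw XMLHttpRequest and an arbitrary byte threshold (both the cap and the function name are illustrative; the real threshold and request layer would still need to be decided):

```typescript
// Sketch: abort automatically once the bytes received exceed a threshold.
// The 500 MiB cap and the raw XMLHttpRequest are illustrative assumptions.
const MAX_ANNOTATION_BYTES = 500 * 1024 * 1024

function downloadWithByteLimit (url: string, accessToken: string): Promise<ArrayBuffer> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest()
    xhr.open('GET', url)
    xhr.responseType = 'arraybuffer'
    xhr.setRequestHeader('Authorization', `Bearer ${accessToken}`)
    xhr.onprogress = (event: ProgressEvent) => {
      // event.total is 0 here, so watch event.loaded instead.
      if (event.loaded > MAX_ANNOTATION_BYTES) {
        xhr.abort()
        reject(new Error(`Aborted: bulk data exceeded ${MAX_ANNOTATION_BYTES} bytes`))
      }
    }
    xhr.onload = () => resolve(xhr.response as ArrayBuffer)
    xhr.onerror = () => reject(new Error('Network error'))
    xhr.send()
  })
}
```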
Large ANN objects can lead to exhausting browser RAM, and (at least in some situations) may lead to complete lockup of the computer.
To help debug this, we discussed implementing the following in the developer console:
Related to but distinct from #230.