Clarifications on dimensions #18
Comments
Hi, thank you for your interest! Let me clarify the data dimensions first. Given 4D MRI data [W, H, Z, T], [W, H, Z] are the dimensions that describe the MRI scan in 3D space, i.e. its extent along the three (x, y, z) axes. For a 4D MRI scan with multiple acquisitions, the additional 4th dimension T indicates the number of observations of the same 3D structure. Due to the randomness in each acquisition, each 3D observation may be noisy in a different way. Denoising methods (including DDM2) are designed to learn the consistency across those different noisy observations in order to recover a clean 3D structure. For the choice of … Back to your problem, I think it still makes sense to set the same parameters for …
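As a concrete illustration of this [W, H, Z, T] convention, here is a minimal NumPy sketch. Treating valid_mask as a [start, end) range of volume indices along T is an assumption based on the hardi150.json values discussed later in this thread, not something confirmed by the authors here.

```python
import numpy as np

# Illustrative [W, H, Z, T] array matching the HARDI150 shape from this
# thread: an 81 x 106 x 76 spatial grid observed 160 times along T.
data = np.zeros((81, 106, 76, 160), dtype=np.float32)

W, H, Z, T = data.shape
volume_0 = data[..., 0]   # one noisy 3D observation, shape (81, 106, 76)

# Assumption: valid_mask = [10, 160] is a [start, end) range over the
# T axis, keeping volumes 10..159 and skipping the first 10.
valid_mask = [10, 160]
usable = data[..., valid_mask[0]:valid_mask[1]]   # shape (81, 106, 76, 150)
print(volume_0.shape, usable.shape)
```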
Thanks so much, @tiangexiang, for your fast response to this issue and the detailed explanations. Now, after solving a minor bug as in the following, …
I added a new line …
Do you have any ideas on how this issue could possibly be solved? I have tried some modifications in …
Hi, it seems the error comes from the shape of the raw_input. Can you make sure the tensor is in the right shape?
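A minimal sketch of such a check, assuming the 4D volume is loaded with nibabel; the file path and variable names are hypothetical, not the actual DDM2 code.

```python
import nibabel as nib

# Hypothetical path: point this at your own 4D NIfTI file.
img = nib.load('my_diffusion_data.nii.gz')
raw_input = img.get_fdata()

# The data discussed in this thread should be 4D: [W, H, Z, T].
print('raw_input shape:', raw_input.shape)
assert raw_input.ndim == 4, f'expected a 4D [W, H, Z, T] array, got {raw_input.shape}'
```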
To double-check the shapes, I added a few lines in …
And in the results, I get the following: …
It seems that the data loaded for training is good, but the data loaded for validation may not be in the right shape. In any case, the problem must come from how you load the data. I am not able to provide meaningful suggestions without more information; I still suggest inspecting the tensor shape at all possible locations.
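One way to follow that advice is to print the batch shapes from the training and validation loaders side by side. Below is a minimal PyTorch sketch; the toy TensorDataset loaders are stand-ins for the project's real dataloaders.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins so the sketch runs on its own; in practice, pass in the
# project's actual training and validation loaders instead.
train_loader = DataLoader(TensorDataset(torch.randn(8, 1, 76, 81, 106)), batch_size=2)
val_loader = DataLoader(TensorDataset(torch.randn(4, 1, 76, 81, 106)), batch_size=2)

def inspect_loader(name, loader, n=2):
    """Print the shapes of the first n batches yielded by a loader."""
    for i, batch in enumerate(loader):
        if i >= n:
            break
        if isinstance(batch, dict):
            shapes = {k: tuple(v.shape) for k, v in batch.items() if torch.is_tensor(v)}
        elif isinstance(batch, (list, tuple)):
            shapes = [tuple(b.shape) for b in batch if torch.is_tensor(b)]
        else:
            shapes = tuple(batch.shape)
        print(f'{name}[{i}]: {shapes}')

inspect_loader('train', train_loader)
inspect_loader('val', val_loader)
```

If the two printouts disagree (for example, a missing channel or volume axis in the validation batches), the bug is in the validation data path.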
BTW, for the previous issue case, I even added …
This is so weird; how come the training load is successful but validation fails?! My dataset is simply …
Hi, thank you for this amazing paper. I wanted to ask you a few questions in some detail.
I have seen in multiple places (e.g. mri_dataset.py) that you define valid_mask = [10, 160]. Considering your data size of (81, 106, 76, 160), are there any particular reasons you chose val_volume_idx = 40 and valid_mask = [10, 160] in hardi150.json? The reason I ask is that I am working with (118, 118, 25, 56) 4D diffusion data, and there are some issues I run into when defining mri_dataset.py as follows: … Do you have any idea where this could originate from?
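For reference, if valid_mask is read as a [start, end) range of volume indices along the 4th axis (which the hardi150.json values suggest, but which is an assumption here), an adapted configuration for a (118, 118, 25, 56) volume might look like the sketch below. The key names mirror the config discussed above; the chosen values are illustrative, not the authors' recommendation.

```python
# Assumed shape of the user's 4D diffusion data: [W, H, Z, T].
W, H, Z, T = 118, 118, 25, 56

config = {
    # must stay within the 4th axis, i.e. 0 <= val_volume_idx < T
    'val_volume_idx': T // 2,   # e.g. 28
    # if read as a [start, end) range over T, the end must not exceed T = 56
    'valid_mask': [5, T],       # e.g. skip the first 5 volumes
}

# Make the two constraints explicit: both the validation volume index and
# the mask range must fall inside the T axis of the data.
assert 0 <= config['val_volume_idx'] < T
assert 0 <= config['valid_mask'][0] < config['valid_mask'][1] <= T
```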