64 gradient directions at b=1000 s/mm^2 and 1 b0. #13

Open
CallowBrainProject opened this issue Feb 1, 2020 · 12 comments
Labels: feedback

Comments

@CallowBrainProject

Here is some feedback on 64 gradient directions (b=1000 s/mm^2) and 1 b0 data I have, without the opposite phase encoding direction.

See below the gm_norm, wm_norm, and csf_norm data.
Screen Shot 2020-02-01 at 4 31 28 PM
Screen Shot 2020-02-01 at 4 32 03 PM
Screen Shot 2020-02-01 at 4 33 00 PM

Also, see the dec.mif data and tractography data. I have some additional types of scans to give feedback on soon!
Screen Shot 2020-02-01 at 4 19 22 PM
Screen Shot 2020-02-01 at 4 47 42 PM

I would like to figure out how to normalize the gm, wm, and csf like signal into one image so that I can get an image with the percent (CSF, gm, and wm) to run a group-wise analysis within the hippocampus if possible. Any suggestions for next steps on how to run such an analysis? Thank you again for all the help so far and I look forward to hearing your thoughts on the data I have shared!

CallowBrainProject added the feedback label on Feb 1, 2020
@thijsdhollander
Collaborator

Hi @CallowBrainProject,

Here is some feedback on 64 gradient directions (b=1000 s/mm^2) and 1 b0 data I have, without the opposite phase encoding direction.

Nice, another typical more routine "clinical" quality acquisition. Thanks for providing this as feedback!

See below the gm_norm, wm_norm, and csf_norm data.

Once more I'm pleasantly surprised by the quality of the results, given the limited quality of the dataset. Most observations are equivalent to those posted in several of the other feedback posts; the gist is that this looks very nice. It's great to see that this observation consistently reproduces for different people's datasets. 😎👍

Also, see the dec.mif data and tractography data. I have some additional types of scans to give feedback on soon!

The WM FOD-based DEC map also looks very good. The image is zoomed out a bit too far to appreciate the FOD overlay, but judging by the 3-tissue maps and the DEC map, I reckon these should be fine. The tractography further confirms this must be the case. As the documentation mentions, for the tractography specifically it's always worth playing around a bit with the -cutoff value to tailor it to your data, but also to your specific application (that is, if you're aiming for a tractography application of course). In your case, all looks generally ok; the tractography is only a bit affected at the far front end of the brain in the slice shown, which is clearly the consequence of the EPI distortions that weren't corrected due to the absence of reverse phase encoded data. This is entirely expected of course.

I would like to figure out how to normalize the gm, wm, and csf like signal into one image so that I can get an image with the percent (CSF, gm, and wm) to run a group-wise analysis within the hippocampus if possible. Any suggestions for next steps on how to run such an analysis?

Ok, this is a good opportunity to also talk about the ranges of intensities in those tissue images, i.e. the ones you showed here. Typically (and after you've used mtnormalise), the more "natural" range of values in those images would be between 0 and 0.282... This is because 0.282... is the spherical harmonics (SH) coefficient for l=0 that's equivalent to a spherical integral of 1. So basically a value of 0.282... is the equivalent of "1 unit" of your respective tissue response function. However, you'll notice those maps also have values beyond 0.282..., and if you were to sum all 3 maps, they wouldn't sum exactly to 0.282... either, nor even to a single unique "constant" value across the image. In other words, there are regions that have more than "1 unit" of some tissue types, regions that have more than "1" total summed tissue signal, and regions that have less than "1" total summed tissue signal. The reason for this is that not all voxel content is measured by the T2-weighted diffusion-weighted signal we get from the scanner, combined with the fact that (3-tissue) CSD works on the absolute signal itself; the signal is on purpose not normalised by the b=0 data, as opposed to many other diffusion models (including e.g. DTI, NODDI, etc.).
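To make that 0.282... figure concrete: it's 1/(2√π), the value of the (constant) l=0 SH basis function. This tiny Python sketch (purely illustrative, plain stdlib) confirms that an l=0 coefficient of exactly that value corresponds to a spherical integral of 1:

```python
import math

# The l=0 spherical harmonic basis function is the constant Y00 = 1 / (2*sqrt(pi)).
Y00 = 1.0 / (2.0 * math.sqrt(math.pi))

def spherical_integral(a0):
    """Spherical integral of an FOD that has only an l=0 coefficient a0.

    Its amplitude is the constant a0 * Y00, so the integral over the
    sphere (surface area 4*pi) is 4*pi * a0 * Y00 = 2*sqrt(pi) * a0.
    """
    return 4.0 * math.pi * a0 * Y00

print(round(Y00, 4))                          # 0.2821
print(round(spherical_integral(Y00), 12))     # 1.0 -> "1 unit" of the tissue response
```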

So that's a long story (it's ok if you don't entirely understand it, don't worry) to explain why we call the direct result from 3-tissue CSD just "tissue compartments" or maybe "tissue signals", or similar. But the main point here is that they're not fractions of anything. And this is where the tissue signal fractions come in, which we can compute from the above tissue compartments (or signals). We do this simply by normalising the 3 tissue compartments/signals to sum to 1 (unity) on a voxel-wise basis; i.e. we divide each tissue signal by the sum of all 3 tissue compartments/signals. I might provide a few tools in the future to streamline this process for convenience, but here's the simple version of it. I'll assume you've indeed named your current outputs wmfod_norm.mif, gm_norm.mif and csf_norm.mif. As a side note, the below also works with wmfod.mif, gm.mif and csf.mif, i.e. without having run mtnormalise, and yields the same result. I'd still recommend always running mtnormalise regardless, just in case you use the outputs for other intermediate processing steps (e.g. registration or template building, etc.), where this is of importance for other reasons. So here we go:

  1. Both gm_norm.mif and csf_norm.mif are already single 3D volumes, but wmfod_norm.mif also includes higher-order SH coefficients to represent the full FOD. We only need the first volume of wmfod_norm.mif, which represents the size of the WM compartment/signal in the same way gm_norm.mif and csf_norm.mif do for the other tissues. Here's how to extract that first volume:

     mrconvert wmfod_norm.mif wm_norm.mif -coord 3 0

  2. Next, we compute the sum of the 3 tissue compartments/signals:

     mrcalc wm_norm.mif gm_norm.mif csf_norm.mif -add -add sum_norm.mif

  3. Finally, we divide each tissue compartment/signal image by the sum. There are a few quirks you could watch out for here (and I'll make sure to account for those in some future tool to do this automatically), but in this simplified version I'll just account for non-brain voxels, which you likely masked out earlier in the pipeline. As these voxels likely have a zero intensity, we want to avoid dividing by zero, because that would yield nonsense values (infinity or other). For brevity, I'll assume your mask is called mask.mif, but replace this with the actual name of your mask of course. Here we go:

     mrcalc mask.mif wm_norm.mif sum_norm.mif -divide 0 -if TW.mif
     mrcalc mask.mif gm_norm.mif sum_norm.mif -divide 0 -if TG.mif
     mrcalc mask.mif csf_norm.mif sum_norm.mif -divide 0 -if TC.mif

So this divides each tissue by the sum, but also sets the result to zero outside of the mask via mask.mif combined with the -if operator. The final images are T_W, T_G and T_C, using the names defined in Mito et al. 2019. However, note that in that work we actually first averaged each tissue compartment/signal across the relevant regions we studied, and only then normalised by the sum of those final (region-wise) 3-tissue compartments. So all of this, but then ROI-wise rather than voxel-wise. It would take another 2 pages of text to explain the difference there, but I'll leave it at this here. 😉 Finally, due to numerical quirks, it might be that one or two of the tissue types in some voxels are actually very small negative numbers: those should've been zero (and they're luckily still super-close to zero), but "numerical imperfections" happen. That means that even after the above commands, some fractions in some voxels might be very small negative numbers, and others in the same voxel might therefore be very slightly above 1 (because everything sums to 1 after the above). Again, in a future tool I'll make sure some explicit fixes are put in place to clamp values to proper ranges, but I wanted to keep things here as simple as possible.
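Purely as an illustration of the arithmetic (not an MRtrix tool; all names here are made up), this per-voxel Python sketch mirrors those mrcalc commands, with the division-by-zero guard and a clamp to [0, 1] for those tiny numerical excursions:

```python
def tissue_fractions(wm, gm, csf, in_mask):
    """Voxel-wise 3-tissue signal fractions, mirroring the mrcalc commands:
    divide each tissue signal by the 3-tissue sum, zero outside the mask,
    and clamp tiny negative / slightly-above-1 values into [0, 1]."""
    if not in_mask:
        return 0.0, 0.0, 0.0
    total = wm + gm + csf
    if total <= 0.0:  # avoid dividing by zero in empty voxels
        return 0.0, 0.0, 0.0
    clamp = lambda x: min(1.0, max(0.0, x / total))
    return clamp(wm), clamp(gm), clamp(csf)

print(tissue_fractions(0.2, 0.05, 0.05, True))     # ~ (0.667, 0.167, 0.167)
print(tissue_fractions(0.282, -1e-12, 0.0, True))  # tiny negative GM clamped: (1.0, 0.0, 0.0)
print(tissue_fractions(0.2, 0.1, 0.1, False))      # outside mask: (0.0, 0.0, 0.0)
```

Note that after clamping, the three fractions may no longer sum to exactly 1 in the affected voxels; that's the trade-off of this simple clamp.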

So that's it! Don't forget you only get values within your (brain) mask of course, so if you later go and study these maps in particular regions of interest, make sure your ROIs don't include voxels outside your (brain) mask; the above commands have set values to zero in those voxels. Feel free to post your TW.mif, TG.mif and TC.mif screenshots here for a sanity check. They should look similar to the 3 signal fraction maps posted in this bit of feedback. The range will be very close to between 0 and 1, and you'll notice differences e.g. due to the T_W map looking more "flat" in contrast within the WM area, etc. Essentially, among other things, we've removed T2 shine-through effects here with the above steps.

Thank you again for all the help so far and I look forward to hearing your thoughts on the data I have shared!

No worries at all, always happy to help! 😃

Cheers,
Thijs

@CallowBrainProject
Author

Following up on your previous offer, do these average responses seem acceptable for my data set? I assume the wm_response is supposed to have multiple lines of data?
average_response_csf.txt
average_response_gm.txt
average_response_wm.txt

I am specifically having issues with one of my participants.

See the following for troubleshooting.
images
TC.nii.gz
TG.nii.gz
TW.nii.gz
individual response function
csf_response.txt
gm_response.txt
wm_response.txt

Looking forward to your feedback

@thijsdhollander
Collaborator

Hi Daniel,

I took a quick look at all files; I think I've got a pretty good idea of where a problem might've popped up for this one participant. The question is more how it popped up, but we can backtrack our way to that.

First things first:

Following up on your previous offer, do these average responses seem acceptable for my data set? I assume the wm_response is supposed to have multiple lines of data?

Each response function file in your case should have 2 lines of numbers, related to b=0 and b=1000. Both GM and CSF response functions should only have 1 number per line (so 2 numbers overall). The WM response on the other hand, should have 6 numbers per line (this is equivalent to an lmax=10 anisotropic response function). That's all correct in your files.
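For illustration only, here's a small Python sketch that checks those expected shapes on a made-up WM response file (the numbers are invented, not real coefficients, and parse_response is a hypothetical helper, not an MRtrix function):

```python
def parse_response(text):
    """Parse an MRtrix-style response function text file: one line per
    b-value shell, whitespace-separated SH coefficients, '#' comments."""
    return [[float(v) for v in line.split()]
            for line in text.splitlines()
            if line.strip() and not line.lstrip().startswith("#")]

# Invented example: a b=0 row (isotropic, only the first term non-zero)
# plus a b=1000 row with 6 even-order terms (l = 0, 2, 4, 6, 8, 10).
wm_example = """\
# made-up numbers for illustration
600.0 0.0 0.0 0.0 0.0 0.0
300.0 -180.0 60.0 -15.0 3.0 -0.5
"""

rows = parse_response(wm_example)
print(len(rows))                       # 2 -> two shells: b=0 and b=1000
print(all(len(r) == 6 for r in rows))  # True -> 6 terms per line, i.e. lmax = 10
```

A GM or CSF file would parse the same way, just with a single number per line.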

Other than that, I checked average_response_wm.txt in shview and pressed the right arrow button to see the b=1000 single-fibre WM response: that looks all good as well; a nice typical disk-shape with the amount of sharpness typically expected for b=1000.

I also checked all 3 average response functions and computed their "signal decay metric" (SDM). You can see those values reported by ss3t_csd_beta1 as well when you (did) run it on any / all of your subjects with the average response functions. Each time, it should've been the same values, as they relate directly (and only) to the response functions. For your average WM, GM and CSF response functions, the SDM values are respectively 0.69428, 1.00745 and 3.6759. Those are all sensible and entirely expected values. They should increase in value from WM to GM to CSF, with CSF having a much higher SDM than the other tissues (WM and GM). So nothing to worry about here. 👍
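For what it's worth, my understanding is that the SDM is computed as the natural logarithm of the b=0 signal over the (mean) DWI shell signal; treat that definition as an assumption here. The amplitudes in this Python sketch are made up purely to show the kind of signal decay those reported values imply:

```python
import math

def sdm(s_b0, s_dwi):
    # Signal decay metric: natural log of the b=0 signal over the (mean)
    # DWI signal. (My understanding of the definition; illustration only.)
    return math.log(s_b0 / s_dwi)

# A WM-like signal that roughly halves from b=0 to b=1000 gives an SDM
# near the ~0.69 reported above; a CSF-like ~40x decay lands near ~3.69:
print(round(sdm(1000.0, 500.0), 5))  # 0.69315
print(round(sdm(1000.0, 25.0), 5))   # 3.68888
```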

Secondly, to briefly comment on

individual response function

...for that one subject. So in principle, this doesn't matter, as you ran the SS3T-CSD with the average response functions of course. But! Apart from that fact, this is coincidentally a good check to see that nothing unusual is wrong with that one participant's dataset on a large / global scale. So I've checked those 3 individual response functions as well. Long story short: everything I've described above about the average response functions applies here as well. They all look great. The disk shape looks very similar to the average response function, and the SDM values of these individual response functions are respectively 0.6898, 0.978317 and 3.78922. Note these are also remarkably similar to the above SDM values, which is again great news. This hints that dwi2response dhollander has performed very consistently across your subjects. It also shows, in another way, that this participant's dataset must at least generally be fine. 👍

Finally, the main issue, which we'll have to dig a bit to uncover: your actual tissue signal fractions T_W, T_G and T_C for that one participant. So I've looked at the files, and I've got an idea of what must've at least been the case at some point in your calculations. I observed that the T_W image is either 0 or 'nan' in individual voxels, with a pattern of being nan in regions where I would expect close to (or equal to) 100% WM-likeness. I also saw that both the T_G and T_C images are nan in exactly the same voxels where T_W is nan. In all other brain voxels (where T_W is 0), I checked and saw that T_G and T_C summed to 1.

So, putting all of this evidence together, there's mathematically only one conclusion: all intensities of wm_norm.mif (using the naming of my previous post) must've been 0. In voxels that would've been pure WM (so no GM or CSF), the sum then still becomes 0, leading to a division by zero when computing the tissue fractions, and hence the nan values in those voxels. In other voxels, T_W simply remains 0, and T_G and T_C only show the relative GM-likeness and CSF-likeness relative to the total GM+CSF (with WM missing from the total). That also explains why I do see T_G and T_C in areas where I expect them to be very much present.

Long story short: your wm_norm.mif is somehow all zero for that one participant... which must be because something went wrong when you ran

mrconvert wmfod_norm.mif wm_norm.mif -coord 3 0

This must've been some technical hiccup at your end, I reckon, since (as I mentioned above) dwi2response dhollander worked very well for that subject, so its dataset doesn't seem to be problematic in any obvious way.

Can you check wm_norm.mif for that participant? Is it indeed entirely zero? If so, let's backtrack and check wmfod_norm.mif for that participant. Does it also show all zeros in the first volume? If that's the case, we'll have to backtrack rerunning ss3t_csd_beta1 for that participant. If we need to go all the way to that point, and it does return the same problematic output for that participant, it might be useful to look into sharing the dataset for that participant...

Cheers,
Thijs

@CallowBrainProject
Author

Thank you for the feedback! And yes, I went back and I think there was an issue where the mrconvert command led to a truncated file. I re-ran everything and it looks like it all worked correctly!

@thijsdhollander
Collaborator

No worries; sounds great!

As a small aside: because I realised (from your images) that your dMRI spatial resolution is quite anisotropic and "low" along one dimension, you might be slightly challenged in pulling out (significant) effects from a hippocampus region, because it's a small and somewhat "intricate" structure. Both spatial alignment and partial voluming in general will pose the greatest challenges here.

However, some of this can come down to your eventual regridding strategy. At the moment, I suppose you're regridding your hippocampus ROI (segmentation) to the anisotropic resolution and grid of the final dMRI T_C (and other) maps? The other choice here is to regrid the T_C (and other) maps to the grid and spatial resolution of the hippocampus segmentation image. However, there's yet another alternative: you can also regrid your fully preprocessed dMRI data (to the grid/resolution of the hippocampus segmentation image) right before you perform SS3T-CSD. The difference between regridding the final maps (e.g. T_C) and regridding at a stage right before SS3T-CSD is that the latter has a slight benefit, because SS3T-CSD comes with some constraints (e.g. non-negativity) that can effectively bring some extra information to the model outcome. This is not super-resolution in the classical sense, but technically you could argue it actually is a form of super-resolution. See also some of the documentation I wrote here: https://3tissue.github.io/doc/single-subject.html, under the "Optional: upsampling or regridding" section. Of course this would increase the time to run SS3T-CSD accordingly (and likely drastically), as well as mtnormalise. Note that in that documentation, the example command just upsamples to a given resolution; you'd have to adapt it to use the -template option (just like the other command for the mask, in a way), to regrid to the grid of your hippocampus segmentation/ROI.

In any case, I'd recommend just regarding this as an aside. It might be that your current approach already provides you with a good result (and you're likely quite close to being able to assess that, now that you have the final maps). But if you feel you're missing out on a result and it might come down to the precision of all steps in the pipeline, then this alternate regridding strategy might make a difference.

@CallowBrainProject
Author

Thank you Thijs! That is actually extremely helpful. I recently applied 3-tissue CSD to this data set, where I previously found significant FA and RD changes but found no TC, TG, or TW changes in the hippocampus measuring in the same space. However, I think I will try your proposed regridding approaches on the newly processed data and see if something shows up. My understanding now is I can't simply use mrgrid to 2 or 1.5, I have to regrid it to my hippocampal template. Currently I have been using ANTs to warp my hippocampus ROI from FreeSurfer to diffusion space. So I guess the best approach would be to first regrid the dwi after processing (to what resolution? Voxel 1.5 in mrgrid?) and then warp the ROI and then calculate SS3T and everything else?

Thanks again for all this in depth guidance!

@thijsdhollander
Collaborator

Thank you Thijs! That is actually extremely helpful.

No worries!

I recently applied 3-tissue CSD to this data set, where I previously found significant FA and RD changes but found no TC, TG, or TW changes in the hippocampus measuring in the same space.

That would be quite odd indeed, unless the differences were really only in fibre "geometry", i.e. angles of crossing, etc., where FA (and to a lesser extent RD) would be sensitive to this, but in the 3-tissue model this ends up in the shape of the WM FOD (not in the sizes of the TW, TG and TC signal fractions). But this would be very unusual, especially for it to show a relevant difference between populations... so I wouldn't bet this is the reality.

To assess the full picture, it's always a good idea to make sure the same procedure is followed as much as possible for all analyses. So in this case, if you're thinking of performing this spatial resampling to a different voxel size (e.g. that of your segmentations), it'd be good to essentially:

  1. First perform all relevant dMRI preprocessing for all subjects / datasets, i.e. denoising / unringing / motion-distortion corrections / optionally bias field correction / compute brain masks / estimate individual subject response functions / average response functions to get group-wise WM-GM-CSF response functions
  2. Then upsample / resample the spatial resolution
  3. Then compute all relevant models and metrics on the upsampled data: e.g. DTI via dwi2tensor followed by FA, RD, etc. via tensor2metric; and SS3T-CSD followed by computing TW, TG, TC as above (taking care you don't end up with unusual extreme values in some individual voxels of course).
  4. Compute average metrics across your ROI(s)

And then inspect those, e.g. using box plots or similar, to get an insight into the effects that might be present between groups. This way, those effects come from the same spatial resolution (resampling) and a common ROI definition matching that. It will also rule out that any differences you observe for some, but not other, metrics are due to some kind of bias created by "inconsistent" processing... if that makes sense. 🙂
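Step 4 then boils down to something as simple as this Python sketch (hypothetical names; one value per voxel, flattened):

```python
def roi_mean(metric, mask):
    """Average a metric over the voxels flagged True in a binary ROI mask."""
    values = [m for m, inside in zip(metric, mask) if inside]
    return sum(values) / len(values)

# Made-up 4-voxel "image" and hippocampus mask for one subject:
tw_subject = [0.75, 0.25, 0.1, 0.9]
hippocampus_mask = [True, True, False, False]
print(roi_mean(tw_subject, hippocampus_mask))  # 0.5
```

Collecting one such mean per subject and metric gives you the values to feed into box plots or group statistics.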

However, I think I will try your proposed regridding approaches on the newly processed data and see if something shows up. My understanding now is I can't simply use mrgrid to 2 or 1.5, I have to regrid it to my hippocampal template.

Different approaches are valid and possible, but since eventually the spatial resolution (and even the exact spatial voxel grid!) will have to match between both so you can sample the relevant voxels for e.g. averaging, it makes sense to resample to a matching grid from the start, or at the earliest opportunity. This avoids multiple resamplings, which each decrease your precision more and more.

Currently I have been using ANTs to warp my hippocampus ROI from FreeSurfer to diffusion space.

Ok, so the hippocampal ROIs are indeed already in "diffusion space": the ROIs match the corresponding anatomical structure in the diffusion data (which you can indeed check in mrview, as it shows everything in what we call "scanner space", allowing you to view images relative to each other spatially, even if they don't share the same grid or resolution). So after having warped those ROIs with ANTs, they now have a certain grid and resolution. On the side, also make sure these regions are binary masks at this stage (for later on). If this ROI image resolution is reasonable (and preferably isotropic), we can then regrid the dMRI data right before the SS3T-CSD step to this grid.

However, I have the feeling (from your description) that you warped these ROIs not only to (what I would call) the diffusion MRI space, but also already resampled it to the dMRI grid and resolution... is that correct? That would mean your ROIs actually have quite poor precision, because your dMRI data has such anisotropic voxels. So what to do next, depends on your scenario: did you indeed resample the ROIs already to the dMRI grid?

If you did indeed resample the ROIs, then the strategy would be to first upsample the dMRI data to a resolution of your choosing, e.g. 1.5 mm isotropic would be a reasonable choice. And then follow that up with what you did originally with ANTS for registering and warping/resampling the ROIs to the dMRI data, but do that fresh now, and warp/resample it to the new resampled dMRI data, which would have e.g. 1.5 mm isotropic resolution at this stage.

However, if the above is not true (but I think it is true though, see below at my last comment!), and your warped ROIs do have their own isotropic and good resolution—let's just say for argument's sake that they'd have 1mm isotropic resolution—then we'd do something else. Actually, more or less the other way around then: you'd then just leave these warped ROIs with decent resolution and anatomical overlap with dMRI data as is... and regrid the dMRI data not to a "resolution of your choosing", but to the explicitly provided grid of the ROI image. You'd also do this using mrgrid, but rather than choosing your resolution, you'd provide the ROI image via the -template option. That tells it to adopt the ROI image's grid, and resample (without spatial transformation) the dMRI data to its grid. As mentioned above, you'd do this to the dMRI data preprocessed up until just before the SS3T-CSD step. Once regridded, apply SS3T-CSD as well as any other model/metric you're after; and average / analyse them all in the corresponding ROIs. 🙂

So I guess the best approach would be to first regrid the dwi after processing (to what resolution? Voxel 1.5 in mrgrid?) and then warp the ROI and then calculate SS3T and everything else?

So even though I summarised both approaches above separately, from this latter bit I'm suspecting even more strongly that it's the first option that applies. So first regrid, and then apply all other steps for the ROIs "fresh", so everything ends up on your newly defined grid. As to the resolution: as I mentioned above, I think 1.5 mm isotropic is a reasonable choice.

Finally, do you already have warps you used in ANTs before to warp ROIs to dMRI images? It might be worth re-doing the registration (I'm treating registration, which yields warps, and applying those warps as separate steps) to the upsampled data as well; this might provide more precise alignment too. In the end, this is what it's all about here: you want spatial precision, both in alignment and resolution, so the averaging in the ROI itself ends up being a precise measurement.

Thanks again for all this in depth guidance!

No worries! Yes, I appreciate there are a lot of (sometimes intricate) considerations to make, even for something "as simple as" a ROI-based analysis. The order of different steps (e.g. resampling) is often not focused on, but it can have a huge impact on the precision of the final numbers that are pulled out and analysed.

@CallowBrainProject
Author

Thijs, one last question. What is the benefit of upsampling before performing tissue response estimation? When you upsample, you can't gain any additional information, since you only sample one data point for each voxel? How does upsampling to multiple voxels benefit the analysis process?

@thijsdhollander
Collaborator

What is the benefit of upsampling before performing tissue response estimation?

Before tissue response function estimation specifically, there aren't really any benefits, since the final response functions are merely an average over selected voxels. Here it's best to even stick to the original resolution, for speed and to simply assess all "original" data samples as well as possible. There are more than enough voxels anyway at this step, since we're only after a single final WM, GM and CSF response.

However (and I'm guessing maybe you were referring to this...?), to upsample before 3-tissue CSD, e.g. SS3T-CSD, specifically does have some benefit. Because...

When you upsample you can't gain any additional information since you only sample one data point for each voxel?

...well, you don't gain new information at the upsampling step "itself": it only performs interpolation of nearby voxels to estimate the intensities at the new voxel positions (of e.g. a higher resolution grid). This is just a weighted linear combination of those intensities; that's all interpolation does.
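As a minimal concrete picture of that: 1D linear interpolation, the simplest case, is literally just a weighted sum of the two neighbouring intensities (illustrative Python, not tied to any MRtrix internals):

```python
def lerp(a, b, t):
    # New sample at fractional position t between neighbouring voxels a and b:
    # a purely linear, weighted combination -- no new information is created.
    return (1.0 - t) * a + t * b

print(lerp(10.0, 20.0, 0.5))   # 15.0: the midpoint sample when upsampling by 2
print(lerp(10.0, 20.0, 0.25))  # 12.5
```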

However, SS3T-CSD itself is not a linear process. The forward model is linear, but the inversion / fitting procedure in practice does bring in (non-linear) constraints: the non-negativity requirement of the full angular WM FOD, as well as non-negativity of the GM and CSF compartments. This might not seem much at first sight, but that's slightly deceiving: it is actually a very (very!) strong constraint that brings a lot of information to the table, mainly because the WM FOD is typically very sparse in the angular domain (so a lot of the angular domain is indeed "supposed to be" zero, which is in practice also realised by the non-negativity constraint!). This is partly even the reason why SS3T-CSD specifically can work effectively in practice.
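A toy way to see why such a constraint is genuinely non-linear (and can therefore add information that interpolation by itself can't): clamping negatives to zero does not commute with addition, whereas any purely linear operation does.

```python
def nonneg(x):
    # Toy analogue of a non-negativity constraint: clamp negatives to zero.
    return max(0.0, x)

a, b = 1.0, -2.0
print(nonneg(a) + nonneg(b))  # 1.0
print(nonneg(a + b))          # 0.0 -> constrain-then-sum != sum-then-constrain
```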

So long story short here: the non-negativity constraint brings a lot of information in practice, if you look at the whole thing from an information theory point of view. This is why it can result in a valuable piece of new information even when applied to merely interpolated intensities; i.e. after upsampling the dMRI data. In practice, I found that this results in a sharper contrast between tissue types. Whereas just upsampling the WM-GM-CSF maps themselves would result in a typically blocky/blurry image, upsampling before SS3T-CSD and looking at those same WM-GM-CSF outputs at an upsampled resolution shows a very crisp contrast, with e.g. tissue boundaries that effectively follow expected patterns (e.g. for the WM-GM boundary).

Technically, this is a form of super-resolution: you gain additional spatial information via some prior knowledge. In practice, this case isn't typically referred to as super-resolution though, merely because that term is already more commonly used for different approaches that gain high spatial resolution from a set of lower resolution images with different anisotropic grid orientations.

By the way, this effect has also been described before for FA maps from DTI; see this reference. While this is not an SS3T-CSD example, the principle shares some similarities (the DTI fit and FA computations are also not linear; so order of upsampling versus DTI fit / FA computation again makes a difference).

How does upsampling to multiple voxels benefit the analysis process?

So, well, you've got a higher resolution, sharper contrast then. The impact depends on what you use it for. For things like fixel-based analyses, or anything that requires registration (or population template building), the registration will benefit and in turn yield a more precise result. Tractography might also benefit (although from my observations that seems to be very limited, likely because tractography itself already has strong spatial assumptions, e.g. streamline smoothness due to constraints on angles between steps). Registration to other contrasts, or from other contrasts to the diffusion data, likely also benefits (i.e. aligning your ROIs to your data via an intermediate contrast!). And finally, within ROIs with more intricate small details in shape (e.g. hippocampus for sure), you'll hopefully get a more precise sampling of voxels as well.

Apart from analysis and precision motivations, visualisation of maps will of course also be sharper; which can help in visual (e.g. clinical) assessment of relevant contrasts.

@thijsdhollander
Collaborator

Just to add to the above about the benefits of upsampling before SS3T-CSD: I recently stumbled again on a few figures I generated at some point to use in a slide deck that I regularly use to illustrate this point. Here's a tissue-encoded colour (TEC) map and an FOD-based DEC map (via fod2dec), without upsampling, and with upsampling either after or before SS3T-CSD:

Original resolution:

[screenshots: TEC map, DEC map]

TEC from SS3T-CSD results that are upsampled, as well as from SS3T-CSD on upsampled data:

[screenshots: TEC upsampled after SS3T-CSD, TEC from data upsampled before SS3T-CSD]

FOD-based DEC from SS3T-CSD results that are upsampled, as well as from SS3T-CSD on upsampled data:

[screenshots: DEC upsampled after SS3T-CSD, DEC from data upsampled before SS3T-CSD]

Note how in the upsampled cases, the result looks quite a bit sharper and reveals more detail when upsampling is performed on the data before the SS3T-CSD modelling step. Of course, this comes at the cost of having to run SS3T-CSD on far more voxels; so it'll take much longer accordingly.

@CallowBrainProject
Author

Hello Thijs,

I recently used your technique in a manuscript looking at hippocampal diffusivity and a reviewer asked the following question...

"With regard to 2.5 "Diffusion-Weighted Image Processing": I would be curious to know why the resolution was up-sampled to 1.5 mm isotropic voxels? "

I know this is what you went into detail describing above; however, I want to make sure I answer this in a thorough but also simple-to-follow manner. For reference, I used the technique before tissue response estimation.

Any help you can provide is greatly appreciated!

@thijsdhollander
Collaborator

Hi Daniel,

Great to hear! We get this question a lot for our own manuscripts, typically fixel-based analyses. The gist of the motivation for upsampling is to improve spatial (intensity) contrast. The strict requirement to achieve this is to upsample before 3-tissue CSD (it doesn't have to be before response function estimation though, but that makes little difference regardless). The non-linear constraints within the 3-tissue CSD technique essentially introduce information that "naive" interpolation itself could otherwise not achieve... so in principle it's a little bit of super-resolution (but not in the traditional sense).

Since we get this question so often, as well as some of those other typical questions that pop up from at least one reviewer each time, we've made an effort to document some of these things in our recent, somewhat massive, fixel-based analysis review paper, currently available as a preprint online here: https://osf.io/zu8fv/ . While I of course recommend a full read (for your enjoyment 😉), the bit you're looking for is in the supplemental materials, more specifically in Supplementary Document 1 (fixel-based analysis pipeline steps). The review paper is going through review itself at the moment, but for the time being it might be helpful to use the preprint as a citation to support / justify e.g. the upsampling step.

Hope that helps!

Cheers,
Thijs
