Error code when running #2

Open
bmol239 opened this issue Feb 2, 2024 · 2 comments

bmol239 commented Feb 2, 2024

Hi there, I am trying to use this software to remove a consistent stripe artefact present in all of my b0 and DWI volumes.

The code I ran was: docker run --rm --volume /data/bmol239/Analysis/trouble-shooting:/data/bmol239/Analysis/trouble-shooting maxpietsch/dstripe:1.1 dwidestripe /data/bmol239/Analysis/trouble-shooting/dwi.mif /data/bmol239/Analysis/trouble-shooting/mask.mif /data/bmol239/Analysis/trouble-shooting/dstripe_field.mif -device cpu -corrected dwi_ds.mif -debug

The error message:

MRLoader load_np_funs: {'source': <function get_all at 0x7f722fa40d40>, 'target': <function get_all at 0x7f722fa40d40>}
MRLoader load_sample_funs: <function split_by_vol at 0x7f722fa3c200>
loading data, cropped to mask: False
{'mask_source': 'nn/mask.mif', 'source': 'nn/amp.mif'}
Traceback (most recent call last):
  File "/opt/dStripe/dstripe/eval_stripes3.py", line 495, in <module>
    transforms_val=SampleToTensor4D(), nsamples=0, poverride_dict=poverride_dict)
  File "/opt/dStripe/dstripe/trainer.py", line 525, in predict_val
    self.__load_validation_data(transforms=transforms_val)
  File "/opt/dStripe/dstripe/trainer.py", line 197, in __load_validation_data
    self.__load_data('val', transforms)
  File "/opt/dStripe/dstripe/trainer.py", line 180, in __load_data
    memmap=self.p.dict.get('memmap', False))
  File "/opt/dStripe/dstripe/dataloader2.py", line 143, in __init__
    for _im, _md in self.__postproc(imdata, md):
  File "/opt/dStripe/dstripe/dataloader2.py", line 184, in __postproc
    for imdat, mdat in gen:
  File "/opt/dStripe/dstripe/dwitools.py", line 135, in gen
    im, md = normalise_fun(im, md)
  File "/opt/dStripe/dstripe/dwitools.py", line 94, in normalise_fun
    im['source'] *= v
numpy.core._exceptions.UFuncTypeError: Cannot cast ufunc 'multiply' output from dtype('float64') to dtype('int16') with casting rule 'same_kind'
dwidestripe: /opt/env/bin/python3 /opt/dStripe/dstripe/eval_stripes3.py /opt/dStripe/models/dstripe_2019_07_03-31_v2.pth.tar.json nn/amp.mif nn/mask.mif --butterworth_samples_cutoff=0.65625 --outdir=/dwidestripe-tmp-K0VX2U/ --verbose=0 --batch_size=1 --write_field=true --write_corrected=false --slice_native=false --attention --device=cpu

dwidestripe: [ERROR] failed with return code 1
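
The failing statement, im['source'] *= v in dwitools.py, scales the image array in place by a floating-point factor, which numpy refuses when the array is stored as int16. A possible workaround, an untested sketch assuming the integer datatype of dwi.mif is what triggers the cast error (the dwi_float32.mif filename is just illustrative), is to convert the DWI to float32 first:

    # Untested sketch: re-store the DWI as float32 so the in-place scaling
    # in dwitools.py no longer has to cast back to int16.
    docker run --rm --volume /data/bmol239/Analysis/trouble-shooting:/data/bmol239/Analysis/trouble-shooting \
        maxpietsch/dstripe:1.1 \
        mrconvert /data/bmol239/Analysis/trouble-shooting/dwi.mif \
                  /data/bmol239/Analysis/trouble-shooting/dwi_float32.mif \
                  -datatype float32

Then pass dwi_float32.mif to dwidestripe in place of dwi.mif.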

@johnaeanderson

I am also getting this error.

@johnaeanderson

I've tried running this using the compiled Docker version, and I've tried building my own Docker image (updating the nvidia/cuda drivers). It doesn't seem to matter: every time I get the same "[ERROR] failed with return code 1". I'm sure the hardware is not the issue; this computer has plenty of memory and disk space. I've tried the GPU-accelerated version and the CPU version. I've tried converting the FSL-formatted NIfTI images and the bvec/bval files to MRtrix format beforehand rather than using the -fslgrad option. Nothing seems to work.

I love the concept of this software, and it's a badly needed solution for some scans I have, but I can't get it to work.

johnanderson@JohnAnderson-Lambda:/data/dStripe$ docker run --rm \
    -v /data/BEELAB/Adults_2020/bids/derivatives/qsiprep/sub-A001/ses-01/dwi:/input \
    -v /data/BEELAB/Adults_2020/bids/derivatives/dstripe/sub-A001:/output \
    -w /output \
    dstripe \
    dwidestripe /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.mif \
                /input/sub-A001_ses-01_space-T1w_desc-brain_mask.mif \
                /output/field.mif \
                -corrected /output/corrected_dwi.mif \
                -fslgrad /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bvec \
                         /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bval \
                -nthreads 1
dwidestripe: 
dwidestripe: Note that this script makes use of commands / algorithms that have relevant articles for citation. Please consult the help page (-help option) for more information.
dwidestripe: 
dwidestripe: model: dstripe_2019_07_03-31_v2
dwidestripe: Generated scratch directory: /output/dwidestripe-tmp-YM70X4/
dwidestripe: Changing to scratch directory (/output/dwidestripe-tmp-YM70X4/)
Command:  mrconvert /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.mif /output/dwidestripe-tmp-YM70X4/nn/amp.mif -fslgrad /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bvec /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bval -strides 0,1,2,3,4 -export_grad_mrtrix /output/dwidestripe-tmp-YM70X4/grad
Command:  mrconvert /input/sub-A001_ses-01_space-T1w_desc-brain_mask.mif /output/dwidestripe-tmp-YM70X4/nn/mask.mif -strides 0,1,2,3
Traceback (most recent call last):
  File "/opt/dStripe/dstripe/eval_stripes3.py", line 264, in <module>
    cuda, device, gpu_ids = get_device(args)
  File "/opt/dStripe/dstripe/eval_stripes3.py", line 78, in get_device
    assert torch.cuda.is_available(), args.device
AssertionError: 0
dwidestripe: /opt/env/bin/python3 /opt/dStripe/dstripe/eval_stripes3.py /opt/dStripe/models/dstripe_2019_07_03-31_v2.pth.tar.json nn/amp.mif nn/mask.mif --butterworth_samples_cutoff=0.65625 --outdir=/output/dwidestripe-tmp-YM70X4/ --verbose=0 --batch_size=1 --write_field=true --write_corrected=false --slice_native=false --attention --nthreads=1 --device=0

dwidestripe: [ERROR] failed with return code 1
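
The AssertionError: 0 in this first run comes from the assert torch.cuda.is_available() check in eval_stripes3.py: without -device cpu, dwidestripe defaults to GPU device 0 (note --device=0 in the echoed command), but the plain docker run does not expose any GPU to the container. If the GPU path is wanted, the host GPUs have to be passed through; a sketch, assuming the NVIDIA Container Toolkit is installed on the host:

    # Sketch: expose the host GPUs so torch.cuda.is_available() is true
    # inside the container; otherwise keep using -device cpu as in the run below.
    docker run --rm --gpus all \
        -v /data/BEELAB/Adults_2020/bids/derivatives/qsiprep/sub-A001/ses-01/dwi:/input \
        -v /data/BEELAB/Adults_2020/bids/derivatives/dstripe/sub-A001:/output \
        -w /output \
        dstripe \
        dwidestripe /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.mif \
                    /input/sub-A001_ses-01_space-T1w_desc-brain_mask.mif \
                    /output/field.mif \
                    -corrected /output/corrected_dwi.mif \
                    -fslgrad /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bvec \
                             /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bval \
                    -nthreads 1
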
johnanderson@JohnAnderson-Lambda:/data/dStripe$ docker run --rm \
    -v /data/BEELAB/Adults_2020/bids/derivatives/qsiprep/sub-A001/ses-01/dwi:/input \
    -v /data/BEELAB/Adults_2020/bids/derivatives/dstripe/sub-A001:/output \
    -w /output \
    dstripe \
    dwidestripe /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.mif \
                /input/sub-A001_ses-01_space-T1w_desc-brain_mask.mif \
                /output/field.mif \
                -corrected /output/corrected_dwi.mif \
                -fslgrad /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bvec \
                         /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bval \
                -device cpu \
                -nthreads 1
dwidestripe: 
dwidestripe: Note that this script makes use of commands / algorithms that have relevant articles for citation. Please consult the help page (-help option) for more information.
dwidestripe: 
dwidestripe: model: dstripe_2019_07_03-31_v2
dwidestripe: Generated scratch directory: /output/dwidestripe-tmp-H6OCVH/
dwidestripe: Changing to scratch directory (/output/dwidestripe-tmp-H6OCVH/)
Command:  mrconvert /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.mif /output/dwidestripe-tmp-H6OCVH/nn/amp.mif -fslgrad /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bvec /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.bval -strides 0,1,2,3,4 -export_grad_mrtrix /output/dwidestripe-tmp-H6OCVH/grad
Command:  mrconvert /input/sub-A001_ses-01_space-T1w_desc-brain_mask.mif /output/dwidestripe-tmp-H6OCVH/nn/mask.mif -strides 0,1,2,3
loading: /output/dwidestripe-tmp-H6OCVH/dstripe_2019_07_03-31_v2.pth.tar.json_val
destripe_weight: 0.5
constancy_weight: 0.5
'self.scheduler:<utils.learn.CyclicLR object at 0x796f98a3ff90>'
=> loading checkpoint '/opt/dStripe/models/dstripe_2019_07_03-31_v2.pth.tar'
=> loaded checkpoint '/opt/dStripe/models/dstripe_2019_07_03-31_v2.pth.tar' (epoch 499, best loss 0.001187756218671102)
MRLoader load_np_funs: {'source': <function get_all at 0x796f98a3ad40>, 'target': <function get_all at 0x796f98a3ad40>}
MRLoader load_sample_funs: <function split_by_vol at 0x796f98a3e200>
loading data, cropped to mask: False
{'mask_source': 'nn/mask.mif', 'source': 'nn/amp.mif'}
loading of 1 images in 68 shards done
val_loader:
{'num_cached_per_queue': 2,
 'num_processes': 1,
 'pin_memory': True,
 'seeds': [3478668043],
 'transform': <dataloader.SampleToTensor4D object at 0x796f9994cb50>}
normalising: percentile-scale validation data

/output/dwidestripe-tmp-H6OCVH/eval_dstripe_2019_07_03-31_v2/dstripe_2019_07_03-31_v2.pth.tar.json_val
Traceback (most recent call last):/eval_dstripe_2019_07_03-31_v2/nn-amp_field_vol_00000_2.mif
  File "/opt/dStripe/dstripe/eval_stripes3.py", line 541, in <module>
    for ibatch, sample in enumerate(datagen):
  File "/opt/env/lib/python3.7/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 205, in __next__
    item = self.__get_next_item()
  File "/opt/env/lib/python3.7/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 189, in __get_next_item
    raise RuntimeError("MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of "
RuntimeError: MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of your workers crashed. This is not the actual error message! Look further up your stdout to see what caused the error. Please also check whether your RAM was full
dwidestripe: /opt/env/bin/python3 /opt/dStripe/dstripe/eval_stripes3.py /opt/dStripe/models/dstripe_2019_07_03-31_v2.pth.tar.json nn/amp.mif nn/mask.mif --butterworth_samples_cutoff=0.65625 --outdir=/output/dwidestripe-tmp-H6OCVH/ --verbose=0 --batch_size=1 --write_field=true --write_corrected=false --slice_native=false --attention --nthreads=1 --device=cpu

dwidestripe: [ERROR] failed with return code 1
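
In this second run the MultiThreadedAugmenter wrapper hides the worker's actual exception, but given the first report in this thread the underlying cause may well be the same int16 casting error during normalisation. A quick way to check the stored datatype of the input DWI (paths as in the commands above):

    # Print the on-disk datatype of the preprocessed DWI; an integer type
    # such as Int16LE would match the UFuncTypeError reported above.
    docker run --rm \
        -v /data/BEELAB/Adults_2020/bids/derivatives/qsiprep/sub-A001/ses-01/dwi:/input \
        dstripe \
        mrinfo /input/sub-A001_ses-01_space-T1w_desc-preproc_dwi.mif -datatype

If it is an integer type, converting it to float32 with mrconvert -datatype float32 before running dwidestripe, as sketched under the first comment above, would be worth trying.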
