
Segmentation results and "train Vs train_auto" #27

Open
carlotita22 opened this issue Oct 5, 2023 · 2 comments

Comments

@carlotita22

Hi! How can I see the results of the segmentations? I am adding the following code inside the model_predict function (in test.py):

for i, prediction in enumerate(seg_pred):
    # wrap each predicted volume in a NIfTI image (identity affine) and save it
    nifti_img = nib.Nifti1Image(prediction.astype(np.float32), affine=np.eye(4))
    output_filename = f'prediction_{i}.nii.gz'
    nib.save(nifti_img, output_filename)

But I am not sure whether seg_pred corresponds to the final mask predicted by the network. I also think something else is wrong, because I get the following error. How can I do this?
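
In case seg_pred actually holds per-class scores rather than a final label map (that is only my guess about the output format), I assume I would first need to convert it, e.g.:

    import numpy as np

    # Assumption: seg_pred has shape (num_classes, D, H, W) with per-class scores.
    # A stand-in array is used here just to keep the sketch runnable.
    scores = np.random.rand(2, 128, 128, 128).astype(np.float32)  # placeholder for seg_pred
    mask = np.argmax(scores, axis=0).astype(np.float32)           # per-voxel class labels
    print(mask.shape)  # (128, 128, 128)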

[22:12:25.424] Namespace(data='myocardium', snapshot_path='path/to/snapshot/myocardium', data_prefix='path/to/data folder/', rand_crop_size=(128, 128, 128), device='cuda:0', num_prompts=1, batch_size=1, num_classes=2, num_worker=6, checkpoint='last', tolerance=5)
Traceback (most recent call last):
File "/mnt/workspace/cgrivera/3DSAM-adapter/3DSAM-adapter/3DSAM-adapter/test.py", line 301, in
main()
File "/mnt/workspace/cgrivera/3DSAM-adapter/3DSAM-adapter/3DSAM-adapter/test.py", line 118, in main
torch.load(os.path.join(args.snapshot_path, file), map_location='cpu')["feature_dict"][i], strict=True)
KeyError: 'feature_dict'.

Thanks in advance,
Regards!

@carlotita22
Author

Bumping this up.

@peterant330
Collaborator

peterant330 commented Oct 11, 2023

Hi,

According to your error message, the problem has nothing to do with the code you added. The checkpoint you saved during training has a different format from the one expected during testing. It looks like you trained with train_auto.py but are testing with test.py, so the prompt encoder ("feature_dict") is not contained in your checkpoint, while inference with test.py needs it.
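
If you want to double-check, you can print the top-level keys of the saved checkpoint; a checkpoint produced by train_auto.py will not contain "feature_dict". A rough sketch (paths and file extension are placeholders, adjust them to your setup):

    import os
    import torch

    snapshot_path = "path/to/snapshot/myocardium"  # placeholder, same as in your log
    for fname in sorted(os.listdir(snapshot_path)):
        if not fname.endswith(".pth"):  # adjust if your checkpoints use another extension
            continue
        ckpt = torch.load(os.path.join(snapshot_path, fname), map_location="cpu")
        print(fname, "->", list(ckpt.keys()))  # check whether "feature_dict" appears here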
