The data and code for the paper "AISCT-SAM: A Clinical Knowledge-Driven Fine-Tuning Strategy for Applying Foundation Model to Fully Automatic Acute Ischemic Stroke Lesion Segmentation on Non-Contrast CT Scans" submitted to IEEE ICASSP 2025.
- CUDA 11.7
- Python 3.10.13
- PyTorch 2.0.0
- Torchvision 0.15.0
- batchgenerators 0.25
- SimpleITK 2.3.0
- SciPy 1.11.3
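As a quick sanity check of the environment, the installed versions can be verified from Python (a minimal sketch; the package names are the PyPI names listed above):

```python
# Print the CUDA build and the installed versions of the required packages.
from importlib.metadata import version

import torch

print("CUDA:", torch.version.cuda)  # expect 11.7
for pkg in ["torch", "torchvision", "batchgenerators", "SimpleITK", "scipy"]:
    print(pkg, version(pkg))
```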
- Install our modified nnUNet as follows:
```
git clone https://github.com/GitHub-TXZ/AISCT-SAM.git
cd AISCT-SAM
pip install -e .
```
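To confirm that the editable install succeeded, here is a minimal check; it assumes the fork keeps the standard nnUNet v2 package and CLI names that the commands below rely on:

```python
# Verify that the nnUNet package resolves and the training CLI is on PATH.
import importlib.util
import shutil

print(importlib.util.find_spec("nnunetv2"))  # should point into the AISCT-SAM tree
print(shutil.which("nnUNetv2_train"))        # entry point used by the commands below
```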
The AISD dataset can be downloaded from https://github.com/griffinliang/aisd.
After converting the DICOM files of the AISD dataset to NIfTI format, perform skull stripping according to the instructions at https://github.com/WuChanada/StripSkullCT.
Then, perform flip registration according to ./myscripts/Registration. Finally, organize the dataset in the nnUNet-expected format according to the code in nnUNet/nnunet/dataset_conversion.
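For the DICOM-to-NIfTI conversion step above, here is a minimal per-case sketch using SimpleITK (already in the requirements). The paths are hypothetical, and the `_0000` channel suffix follows the usual nnUNet raw-data naming convention:

```python
# Convert one DICOM series to a compressed NIfTI volume with SimpleITK.
import SimpleITK as sitk

dicom_dir = "/path/to/AISD/case_001"  # hypothetical input directory
series_files = sitk.ImageSeriesReader.GetGDCMSeriesFileNames(dicom_dir)

reader = sitk.ImageSeriesReader()
reader.SetFileNames(series_files)
image = reader.Execute()

# nnUNet expects imagesTr/<case>_0000.nii.gz for single-channel CT input.
sitk.WriteImage(image, "/path/to/nifti/case_001_0000.nii.gz")
```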
Some of the compared methods use the same pre-processing steps as nnUNet; the pre-processing documentation can be found at [DOC].
Activate your conda environment, then run the following in your command line:
- For training, run:

```
CUDA_VISIBLE_DEVICES=0 nnUNetv2_train -dataset_name_or_id TASK_ID -model_name AIS_SAM -ex_name Ex1@b_2_p_20_256_256_s_3.0_0.4375_0.4375
```
- For testing, run:

```
CUDA_VISIBLE_DEVICES=0 nnUNetv2_train -dataset_name_or_id TASK_ID -model_name AIS_SAM -ex_name Ex1@b_2_p_20_256_256_s_3.0_0.4375_0.4375 --val
```
The pre-trained model for the AISD dataset can be downloaded from [Baidu YUN] with the extraction password "puyl".
For reproduction, we integrated the CNN-based, Transformer-based, hybrid CNN-Transformer-based, and Mamba-based methods into the nnUNet framework. All of these 3D methods can be found at [DOC].
For the AIS-specific methods and SAM-based methods, we reproduced them on our AIS datasets using our own re-implementations. The links to their open-source codes are listed as follows:
- [Kuang et al.]
- [UNet-RF]
- [ADN]
- [SAM-Med2D]
- [SAM]
- [SAM-Med3D]
- [MedSAM]
- [MSA]
- [3DSAM Adapter]
- [SAMed]
Note that, for fair comparison, all compared methods used the same data split, and all metrics were computed at the 3D image level.
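As a reference for the evaluation above, here is a minimal sketch of a 3D image-level Dice score computed on binary NIfTI masks (the file names are hypothetical, and this is not necessarily the exact evaluation script used in the paper):

```python
# Compute Dice over the whole 3D volume (not slice-by-slice).
import numpy as np
import SimpleITK as sitk

def dice_3d(pred_path: str, gt_path: str) -> float:
    pred = sitk.GetArrayFromImage(sitk.ReadImage(pred_path)) > 0
    gt = sitk.GetArrayFromImage(sitk.ReadImage(gt_path)) > 0
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

print(dice_3d("predsTs/case_001.nii.gz", "labelsTs/case_001.nii.gz"))
```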
Part of our code is reused from nnU-Net; we thank Fabian Isensee for the awesome nnU-Net codebase, and we express our sincere gratitude to the authors of all the open-source code used in our work.