Pytorch implementation for paper Deep learning – Radiologist Complementarity Decision System for Liver Fibrosis Staging on CT: A multi-center study (under review)
This project includes four folders:
- data: stores the data used in this project (for both the segmentation and staging tasks).
- segmentation: source code for liver and spleen segmentation.
- staging: source code for liver fibrosis staging.
- UI: source code for the UI of this AI-assistance system, which integrates both the liver & spleen segmentation and the liver fibrosis staging models.
We provide example data for both the segmentation and staging tasks, stored in the data folder. We also provide a .yaml file for quick environment setup.
This AI tool supports Linux and Windows and has been tested on the following systems:
- Linux: Ubuntu 18.04
- Windows: Windows 10, version 20H2
Package Version
---------------------- -------------------
torch 1.4.0
torchvision 0.5.0
h5py 3.1.0
opencv-python 4.5.2.52
SimpleITK 2.0.2
scikit-image 0.17.2
ml-collections 0.1.1
tensorboardx 2.2.0
medpy 0.4.0
scikit-learn 0.24.2
pandas 1.1.5
PyQt5 5.9.2
We recommend using conda to quickly set up the environment; the build takes about 5 minutes. Please use the following commands (this example uses Linux):
conda env create -f environment.yaml
conda activate LFSS
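Optionally, you can verify that the key packages were installed correctly. This is a minimal sanity check of our own, not part of the repository:

```python
# Minimal sanity check (not part of the repository): confirm that the key
# dependencies import and report the expected versions.
import torch, torchvision, h5py, SimpleITK, cv2

print("torch", torch.__version__)              # expected: 1.4.0
print("torchvision", torchvision.__version__)  # expected: 0.5.0
print("h5py", h5py.__version__)                # expected: 3.1.0
print("opencv", cv2.__version__)               # expected: 4.5.2
```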
- For the training data of liver and spleen segmentation, put each sample's raw image (img_raw.nii) and segmentation ground truth (gt.nii) in its own subfolder of "/data/train_seg".
- For the training data of liver fibrosis staging, put each sample's raw image (img_raw.nii) and segmentation ground truth (gt.nii) in its own subfolder of "/data/train_cls/S*", where "*" denotes the sample's liver fibrosis stage.
- For the test data, put each sample's raw image (img_raw.nii) in its own subfolder of "/data/test".
The segmentation results (gt.nii) can be generated via our human-in-the-loop strategy. The segmentation networks are based on UMMKD and TransUNet.
All the data should then be organized as follows:
.
└── data
    ├── test
    │   ├── R0001
    │   │   ├── img_raw.nii
    │   │   └── gt.nii
    │   ├── R0002
    │   └── ...
    ├── train_cls
    │   ├── S0
    │   │   ├── R0003
    │   │   │   ├── img_raw.nii
    │   │   │   └── gt.nii
    │   │   ├── R0004
    │   │   └── ...
    │   ├── S1
    │   ├── S2
    │   ├── S3
    │   └── S4
    └── train_seg
        ├── R0005
        │   ├── img_raw.nii
        │   └── gt.nii
        ├── R0006
        └── ...
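If you want to check your data layout before preprocessing, a minimal sketch of our own (not part of the repository) that walks the tree above could look like this:

```python
from pathlib import Path

# Minimal layout check (not part of the repository): verify that every
# sample folder contains the files the preprocessing scripts expect.
def check_layout(root="data"):
    for case in sorted(Path(root, "test").iterdir()):
        if case.is_dir():
            assert (case / "img_raw.nii").exists(), f"missing image in {case}"
    for case in sorted(Path(root, "train_seg").iterdir()):
        if case.is_dir():
            assert (case / "img_raw.nii").exists(), f"missing image in {case}"
            assert (case / "gt.nii").exists(), f"missing ground truth in {case}"
    for stage in sorted(Path(root, "train_cls").glob("S*")):
        for case in sorted(stage.iterdir()):
            if case.is_dir():
                assert (case / "img_raw.nii").exists(), f"missing image in {case}"
                assert (case / "gt.nii").exists(), f"missing ground truth in {case}"

check_layout()
```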
To preprocess the data for the segmentation task:
cd ./segmentation/code
python preprocessing_train.py
python preprocessing_test.py
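The preprocessing scripts read the NIfTI volumes; if you want to inspect a volume yourself, a minimal SimpleITK read looks like this (illustrative only; the relative path assumes you are still in segmentation/code):

```python
import SimpleITK as sitk

# Illustrative only: load one raw CT volume and inspect its geometry.
img = sitk.ReadImage("../../data/test/R0001/img_raw.nii")
arr = sitk.GetArrayFromImage(img)  # numpy array, shape (slices, height, width)
print(arr.shape, img.GetSpacing(), img.GetOrigin())
```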
- Get pretrained models from this link: R50-ViT-B_16, ViT-B_16, ViT-L_16, ... (replace {MODEL_NAME} below accordingly, e.g. R50-ViT-B_16):
mkdir -p ./models_save/vit_checkpoint/imagenet21k &&
wget https://storage.googleapis.com/vit_models/imagenet21k/{MODEL_NAME}.npz &&
mv {MODEL_NAME}.npz ./models_save/vit_checkpoint/imagenet21k/{MODEL_NAME}.npz
- Train and test the segmentation model:
python train.py
python test.py
- It takes about 3 minutes to complete both the training and testing phases on the example data (just 1 sample), while training on the full training set used in our paper takes about 12 hours.
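If you want to score a predicted mask against a reference mask yourself, the medpy package listed above provides a binary Dice metric. A small illustrative example; "pred.nii" is a hypothetical file name, adapt it to your test.py output:

```python
import SimpleITK as sitk
from medpy.metric.binary import dc

# Illustrative only: Dice coefficient between a predicted mask and a
# reference mask; "pred.nii" is a hypothetical file name.
pred = sitk.GetArrayFromImage(sitk.ReadImage("pred.nii")) > 0
ref = sitk.GetArrayFromImage(sitk.ReadImage("../../data/train_seg/R0005/gt.nii")) > 0
print("Dice:", dc(pred, ref))
```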
For the liver fibrosis staging task, preprocess the data and then train and test the model:
cd ./staging/code
python preprocessing_train.py
python preprocessing_test.py
python train.py
python test.py
- It takes about 3 minutes to complete both the training and testing phases on the example data (just 1 sample), while training on the full training set used in our paper takes about 1 hour.
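To evaluate staging predictions yourself, scikit-learn (listed above) covers the standard metrics. A small illustrative example with hypothetical prediction/label arrays; replace them with your test.py outputs:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Illustrative only: y_true/y_pred hold fibrosis stages in {0, ..., 4};
# these hypothetical arrays stand in for the real test.py outputs.
y_true = np.array([0, 1, 2, 3, 4, 4])
y_pred = np.array([0, 1, 2, 3, 3, 4])
print("accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3, 4]))
```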
After training the segmentation and classification models, you can use them to build the AI-assistance software. The code for UI development is in the folder "UI". Run the AI-assistance software with:
cd ./UI
python main.py
The 3D-image visualization is based on the code of simpleITK-Snap. A demo of this AI-assistance software is shown below:
The code for the deep learning–radiologist complementarity decision system is in the folder "DRCDS"; it is adapted from CoDoC. We provide example data for training, validation, and testing in /DRCDS/data/internal.
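CoDoC-style complementarity decides, per case, whether to output the model's prediction or defer to the radiologist. Below is a minimal sketch of such a deferral rule; this is our simplification with a hypothetical confidence threshold, whereas the actual DRCDS code learns its operating points from the validation data:

```python
# Minimal sketch of a CoDoC-style deferral rule (our simplification; the
# threshold is hypothetical, while DRCDS learns it from validation data).
def decide(ai_probs, radiologist_stage, threshold=0.8):
    """Output the AI's stage when its top softmax probability clears the
    threshold; otherwise defer to the radiologist's reading."""
    ai_stage = max(range(len(ai_probs)), key=lambda k: ai_probs[k])
    if ai_probs[ai_stage] >= threshold:
        return ai_stage           # confident: keep the AI prediction
    return radiologist_stage      # uncertain: defer to the radiologist

print(decide([0.05, 0.05, 0.05, 0.05, 0.80], radiologist_stage=3))  # -> 4
print(decide([0.10, 0.20, 0.30, 0.25, 0.15], radiologist_stage=3))  # -> 3
```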
This project is covered under the Apache 2.0 License.
For any questions, please contact '[email protected]'.