
Weakly-supervised Medical Image Segmentation with Gaze Annotations

This is the PyTorch implementation of our MICCAI 2024 paper "Weakly-supervised Medical Image Segmentation with Gaze Annotations" by Yuan Zhong, Chenhui Tang, Yumeng Yang, Ruoxi Qi, Kang Zhou, Yuqi Gong, Pheng-Ann Heng, Janet H. Hsiao*, and Qi Dou*.

* denotes corresponding authors.

Abstract

Eye gaze that reveals human observational patterns has increasingly been incorporated into solutions for vision tasks. Despite recent explorations on leveraging gaze to aid deep networks, few studies exploit gaze as an efficient annotation approach for medical image segmentation, which typically entails heavy annotation costs. In this paper, we propose to collect dense weak supervision for medical image segmentation with a gaze annotation scheme. To train with gaze, we propose a multi-level framework that trains multiple networks from discriminative human attention, simulated with a set of pseudo-masks derived by applying hierarchical thresholds on gaze heatmaps. Furthermore, to mitigate gaze noise, a cross-level consistency is exploited to regularize against overfitting to noisy labels, steering models toward clean patterns learned by peer networks. The proposed method is validated on two public medical datasets for polyp and prostate segmentation. We contribute a high-quality gaze dataset, entitled GazeMedSeg, as an extension to two popular medical segmentation datasets. To the best of our knowledge, this is the first gaze dataset for medical image segmentation. Our experiments demonstrate that gaze annotation outperforms previous label-efficient annotation schemes in terms of both performance and annotation time.
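
For intuition, below is a minimal sketch (ours, not the paper's exact code) of how multi-level pseudo-masks could be derived by applying hierarchical thresholds to a normalized gaze heatmap; the threshold values and binary mask format are illustrative assumptions.

    import numpy as np

    def hierarchical_pseudo_masks(heatmap, thresholds=(0.3, 0.5, 0.7)):
        """Derive one binary pseudo-mask per threshold level from a gaze heatmap.

        heatmap: 2-D float array; the threshold values here are illustrative.
        """
        h = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
        # Lower thresholds yield looser, more inclusive masks; higher thresholds
        # keep only the most fixated regions, simulating discriminative attention.
        return [(h >= t).astype(np.uint8) for t in thresholds]

    # Each level's mask would supervise one of the peer networks in the framework.
    masks = hierarchical_pseudo_masks(np.random.rand(256, 256))
    print([int(m.sum()) for m in masks])  # mask area shrinks as the threshold grows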

Highlights

  • Public gaze dataset GazeMedSeg for segmentation, released as an extension to the Kvasir-SEG and NCI-ISBI datasets.
  • A general plug-in framework for weakly-supervised medical image segmentation using gaze annotations.

Gaze Dataset

Please see here for a detailed description of our GazeMedSeg dataset.

Getting Started

Installation

  1. Download from GitHub

    git clone https://github.com/med-air/GazeMedSeg.git
    cd GazeMedSeg
  2. Create conda environment

    conda env create -f environment.yaml
    conda activate gaze

Preparing Datasets

Note: You can download our preprocessed dataset here, allowing you to skip this and the next step to reproduce our experiments.

  • Download the Kvasir-SEG and NCI-ISBI datasets from their official websites. For NCI-ISBI, please download the Training and Test sets.
  • Preprocess the datasets. Only the NCI-ISBI dataset needs preprocessing: we extract the axial slices where the prostate is present (see the sketch after this list). The relevant script is here.
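
As a rough illustration of this preprocessing step, the sketch below keeps only the axial slices whose label mask is non-empty. The file names, axis order (slices first), and NumPy output format are assumptions; consult the linked script for the actual procedure.

    import os
    import numpy as np
    import SimpleITK as sitk

    # Hypothetical file names; see the repository's preprocessing script for the
    # actual NCI-ISBI layout and output format.
    volume = sitk.GetArrayFromImage(sitk.ReadImage("case.nrrd"))        # (Z, H, W)
    labels = sitk.GetArrayFromImage(sitk.ReadImage("case_label.nrrd"))  # (Z, H, W)

    os.makedirs("slices", exist_ok=True)
    for z in range(labels.shape[0]):
        if labels[z].any():  # keep the slice only if the prostate is present
            np.save(f"slices/case_{z:03d}_img.npy", volume[z])
            np.save(f"slices/case_{z:03d}_lab.npy", labels[z])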

Preparing Gaze Annotation

  • Download the GazeMedSeg data here, and put the files under the /GazeMedSeg folder.
  • Generate gaze heatmaps and refined CRF maps using the scripts here. These scripts will create a gaze folder within the original dataset directory and generate gaze heatmaps and CRF maps there. The CRF maps will serve as pseudo-masks for gaze supervision (a simplified sketch of these two steps follows this list).
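
To make these two steps concrete, here is a hedged sketch of one way to build a gaze heatmap from fixation points and refine it with a dense CRF (using scipy and pydensecrf); the Gaussian bandwidth and CRF parameters are illustrative assumptions, not the values used by the provided scripts.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    def gaze_heatmap(fixations, shape, sigma=25.0):
        """Accumulate (x, y) fixation points and blur them into a [0, 1] heatmap."""
        h = np.zeros(shape, dtype=np.float32)
        for x, y in fixations:
            h[int(y), int(x)] += 1.0
        h = gaussian_filter(h, sigma)  # sigma is an assumed bandwidth
        return h / (h.max() + 1e-8)

    def crf_refine(image, heatmap, n_iters=5):
        """Refine a heatmap into a binary pseudo-mask with a dense CRF.

        image: H x W x 3 uint8 array; heatmap: H x W float array in [0, 1].
        """
        H, W = heatmap.shape
        probs = np.clip(np.stack([1.0 - heatmap, heatmap]), 1e-8, 1.0)
        d = dcrf.DenseCRF2D(W, H, 2)
        d.setUnaryEnergy(unary_from_softmax(probs))
        d.addPairwiseGaussian(sxy=3, compat=3)
        d.addPairwiseBilateral(sxy=60, srgb=10, compat=5,
                               rgbim=np.ascontiguousarray(image))
        q = np.asarray(d.inference(n_iters)).reshape(2, H, W)
        return (q[1] > q[0]).astype(np.uint8)  # foreground where it wins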

Running Experiments

python run.py -m [supervision_mode] --data [dataset] --model [backbone] -bs [batch_size] \
    --exp_path [experiment_path] --root [dataset_path] --spatial_size [image_size] \
    --in_channels [image_channels] --opt [optimizer] --lr [base_lr] --max_ite [max_ite] \
    --num_levels [num_levels] --cons_mode [cons_mode] --cons_weight [cons_weight]

We provide scripts for reproducing our experiments on the Kvasir-SEG and NCI-ISBI datasets with our gaze annotations here. For more details on the arguments, please refer to parse_args.py.
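
For example, a gaze-supervised run on Kvasir-SEG might look like the following; every argument value here is illustrative, so refer to the provided scripts for the exact settings used in the paper.

    python run.py -m gaze --data kvasir --model unet -bs 16 \
        --exp_path ./exp/kvasir_gaze --root ./data/Kvasir-SEG --spatial_size 320 \
        --in_channels 3 --opt adam --lr 1e-4 --max_ite 10000 \
        --num_levels 3 --cons_mode kl --cons_weight 0.1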

Checkpoints

We also provide model checkpoints for the experiments, as listed below (the evaluation metric is Dice, in %).

                 Kvasir-SEG (Polyp)        NCI-ISBI (Prostate)
    Our paper    77.80                     77.64
    Released     78.86                     79.20
                 [script] [checkpoint]     [script] [checkpoint]
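
A minimal sketch for inspecting a downloaded checkpoint before evaluation (the path and dictionary layout are assumptions; the released files may differ):

    import torch

    ckpt = torch.load("checkpoints/kvasir_gaze.pth", map_location="cpu")
    state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    print(sorted(state.keys())[:5])  # check parameter names before building the model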

Contact

If you have any questions, please feel free to open an issue here, or contact Yuan Zhong.

Citation

@inproceedings{zhong2024weakly,
  title={Weakly-supervised Medical Image Segmentation with Gaze Annotations},
  author={Zhong, Yuan and Tang, Chenhui and Yang, Yumeng and Qi, Ruoxi and Zhou, Kang and Gong, Yuqi and Heng, Pheng Ann and Hsiao, Janet H and Dou, Qi},
  booktitle={International Conference on Medical Image Computing and Computer Assisted Intervention},
  year={2024}
}
