Implementation of the paper [arXiv]:
"MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis", Trinh Thi Le Vuong and Jin Tae Kwak.
Overview of the distillation flow across different tasks and datasets: 1) the supervised task is always conducted, 2) feature distillation is applied if a well-trained teacher model is available, and 3) vanilla knowledge distillation is applied when the teacher and student address the same task, i.e., the same set of categories.
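The flow above can be read as a combined training objective. The snippet below is a minimal sketch for orientation only, not the repository's actual code: the loss weights `lambda_fd` and `lambda_kd`, the temperature `tau`, and the plain MSE feature loss (a stand-in for MoMA's momentum contrastive objective) are all assumptions.

```python
# Minimal sketch of the distillation flow above (illustrative only; the
# weights, temperature, and helper names are assumptions, not the repo's API).
import torch
import torch.nn.functional as F

def total_loss(student_logits, labels,
               student_feat=None, teacher_feat=None,   # set when a teacher is available
               teacher_logits=None,                    # set when tasks are identical
               lambda_fd=1.0, lambda_kd=1.0, tau=4.0):
    # 1) Supervised task: always conducted.
    loss = F.cross_entropy(student_logits, labels)

    # 2) Feature distillation: only if a well-trained teacher is available.
    #    (MSE here is a simple stand-in for MoMA's contrastive feature loss.)
    if teacher_feat is not None:
        loss = loss + lambda_fd * F.mse_loss(student_feat, teacher_feat)

    # 3) Vanilla KD on logits: only if teacher and student solve the same task.
    if teacher_logits is not None:
        kd = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                      F.softmax(teacher_logits / tau, dim=1),
                      reduction="batchmean") * tau * tau
        loss = loss + lambda_kd * kd
    return loss
```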
```bash
# Train with vanilla knowledge distillation (logit-level KD)
./scripts/run_vanilla.sh
```
If the student and teacher datasets differ in the number of categories, you may need to add `--std_strict` and `--tec_strict`.
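These flags presumably toggle strict checkpoint loading. In PyTorch, a classifier head trained on a different number of categories cannot be loaded directly, so the mismatched keys are typically filtered out before a non-strict load. The sketch below illustrates that mechanism; the assumption that `--std_strict`/`--tec_strict` map onto a `strict` argument like this is mine, not the repository's documented behavior.

```python
# Illustrative only: handling a classifier-head mismatch when loading a
# teacher/student checkpoint with a different number of categories.
import torch

def load_backbone(model, ckpt_path, strict=True):
    state = torch.load(ckpt_path, map_location="cpu")
    if not strict:
        # Drop keys whose shapes disagree (e.g., the classification head),
        # then load the remaining weights non-strictly.
        own = model.state_dict()
        state = {k: v for k, v in state.items()
                 if k in own and v.shape == own[k].shape}
    model.load_state_dict(state, strict=strict)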
```bash
# Train with MoMA (momentum contrastive, multi-head attention-based KD)
./scripts/run_moma.sh
```
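For orientation, "momentum contrastive" distillation pairs an EMA-updated key encoder with an InfoNCE loss over a queue of negatives, in the style of MoCo. The generic sketch below is not MoMA's actual implementation (it omits the multi-head attention component), and all names in it are assumptions:

```python
# Generic sketch of momentum-contrastive feature distillation (illustrative;
# not MoMA's actual code).
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(key_enc, query_enc, m=0.999):
    # EMA update of the momentum (key) encoder from the student (query) encoder.
    for pk, pq in zip(key_enc.parameters(), query_enc.parameters()):
        pk.mul_(m).add_(pq, alpha=1.0 - m)

def info_nce(q, k, queue, tau=0.07):
    # q, k: (N, D) student/teacher features; queue: (D, K) negative keys.
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)   # (N, 1) positive logits
    l_neg = q @ queue                          # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```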
```bash
# Train with the comparison (baseline) distillation methods
./scripts/run_comparison.sh
```