We release the source code of our Rank 1 submission to the Multi-Source Domain Adaptation task of VisDA-2019. Details can be found in the technical report.
All pretrained models, synthetic data generated via CycleGAN, and submission files can be downloaded from the link.
You may need a machine with 4 GPUs, PyTorch v1.1.0, and Python 3.
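A minimal setup sketch, assuming a conda environment and that torchvision 0.3.0 is the release paired with PyTorch v1.1.0 (both assumptions):

```bash
# Hypothetical environment setup; any Python 3 environment manager works
conda create -n visda2019 python=3.6 -y
conda activate visda2019
pip install torch==1.1.0 torchvision==0.3.0
```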
- Go to the `Adapt` folder.
- Train the source-only models:

  ```bash
  bash experiments/<DOMAIN>/<NET>/train.sh
  ```

  where `<DOMAIN>` is `clipart` or `painting` and `<NET>` is the network (e.g. `senet154`).
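  For example, a concrete source-only run for the `clipart` domain with `senet154` (illustrative values only):

  ```bash
  bash experiments/clipart/senet154/train.sh
  ```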
- Then repeat the following procedures 4 times, once per `<phase_id>`, starting with the adaptation training:

  ```bash
  bash experiments/<DOMAIN>/<NET>_<phase_id>/train.sh
  ```
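  As a sketch, assuming `<phase_id>` takes the values 1 through 4 (an assumption inferred from the four repetitions; check the actual experiment folder names):

  ```bash
  # Hypothetical outer loop over the four phases for one domain/network pair;
  # the extraction and fusion steps below must run between consecutive phases
  for PHASE in 1 2 3 4; do
    bash experiments/clipart/senet154_${PHASE}/train.sh
  done
  ```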
- Copy the adaptation models to the folder `ExtractFeat/experiments/<phase_id>/<DOMAIN>/<NET>/snapshot`.
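  For instance, assuming the adaptation checkpoints are written to a `snapshot` subfolder of the Adapt experiment directory (a hypothetical layout) and reusing the illustrative values:

  ```bash
  # Source path is hypothetical; adjust to wherever Adapt stores its checkpoints
  mkdir -p ExtractFeat/experiments/1/clipart/senet154/snapshot
  cp Adapt/experiments/clipart/senet154_1/snapshot/* \
     ExtractFeat/experiments/1/clipart/senet154/snapshot/
  ```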
- Extract features by running the scripts:

  ```bash
  bash experiments/<phase_id>/<DOMAIN>/scripts/<NET>.sh
  ```
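  For example, with the same illustrative values:

  ```bash
  bash experiments/1/clipart/scripts/senet154.sh
  ```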
- Copy the features from `experiments/<phase_id>/<DOMAIN>/<NET>/<NET>_<source_and_target_domains>/result` to `dataset/visda2019/pkl_test/<phase_id>/<DOMAIN>/<NET>`.
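  A sketch of the copy, keeping `<source_and_target_domains>` as a placeholder since its value depends on the experiment configuration:

  ```bash
  # Replace <source_and_target_domains> with the actual folder suffix
  mkdir -p dataset/visda2019/pkl_test/1/clipart/senet154
  cp -r experiments/1/clipart/senet154/senet154_<source_and_target_domains>/result/* \
        dataset/visda2019/pkl_test/1/clipart/senet154/
  ```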
- Go to the `FeatFusionTest` folder.
- Train the feature-fusion based adaptation module:

  ```bash
  bash experiments/<phase_id>/<DOMAIN>/train.sh
  ```
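  For example, fusing the phase-1 features for `clipart`:

  ```bash
  bash experiments/1/clipart/train.sh
  ```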
- Copy the pseudo-label file to `Adapt/experiments/<DOMAIN>/<NET>_<next_phase_id>` for the next adaptation phase.
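  A sketch, with `<pseudo_label_file>` as a hypothetical placeholder since the exact filename is not given here:

  ```bash
  # <pseudo_label_file> is hypothetical; substitute the file produced by the fusion step
  cp FeatFusionTest/experiments/1/clipart/<pseudo_label_file> \
     Adapt/experiments/clipart/senet154_2/
  ```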
Please cite our technical report in your publications if it helps your research:
@inproceedings{pan2019visda,
  title={Multi-Source Domain Adaptation and Semi-Supervised Domain Adaptation with Focus on Visual Domain Adaptation Challenge 2019},
  author={Pan, Yingwei and Li, Yehao and Cai, Qi and Chen, Yang and Yao, Ting},
  booktitle={Visual Domain Adaptation Challenge},
  year={2019}
}
Thanks to the domain adaptation community and the contributors of the PyTorch ecosystem, in particular the pretrained models from Cadene and EfficientNet.