# OVANet Overview

This repository provides code for the paper *OVANet: One-vs-All Network for Universal Domain Adaptation*. Please visit our project page for a quick overview, or read the paper itself for details.

## Environment

Python 3.6.9, PyTorch 1.6.0, torchvision 0.7.0, and Apex. We used the NVIDIA Apex library for memory-efficient, high-speed training.
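As a quick sanity check before training, the snippet below (ours, not part of the repository) prints the installed versions so you can confirm they match the ones listed above:

```python
# Hypothetical sanity check (not part of the repository): confirm that the
# expected dependencies are installed before training.
import torch
import torchvision

print(f"torch {torch.__version__}")              # expect 1.6.0
print(f"torchvision {torchvision.__version__}")  # expect 0.7.0
print(f"CUDA available: {torch.cuda.is_available()}")

try:
    import apex  # NVIDIA Apex, used for mixed-precision training
    print("apex: installed")
except ImportError:
    print("apex: missing -- install from https://github.com/NVIDIA/apex")
```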

## Data Preparation

### Datasets

Office Dataset, OfficeHome Dataset, VisDA, DomainNet, NaBird

Prepare the datasets in the `data` directory as follows (a small layout check is sketched after the listing):

```
./data/amazon/images/ ## Office
./data/Real           ## OfficeHome
./data/visda_train    ## VisDA synthetic images
./data/visda_val      ## VisDA real images
./data/dclipart       ## DomainNet ('d' is prepended to all DomainNet directories to avoid confusion with OfficeHome)
./data/nabird/images  ## NaBird
```
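To verify that your layout matches, here is a minimal sketch (ours, not part of the repository; directory names taken from the listing above) that reports which dataset directories are present:

```python
# Hypothetical helper (not part of the repository): report which of the
# expected dataset directories exist under ./data.
from pathlib import Path

EXPECTED = [
    "amazon/images",  # Office
    "Real",           # OfficeHome
    "visda_train",    # VisDA synthetic images
    "visda_val",      # VisDA real images
    "dclipart",       # DomainNet (note the 'd' prefix)
    "nabird/images",  # NaBird
]

data_root = Path("./data")
for rel in EXPECTED:
    status = "ok" if (data_root / rel).is_dir() else "MISSING"
    print(f"{data_root / rel}: {status}")
```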

### File splits

File lists (txt files) need to be stored in `./txt`, e.g.:

```
./txt/source_amazon_opda.txt ## Office
./txt/source_dreal_univ.txt  ## DomainNet
./txt/source_Real_univ.txt   ## OfficeHome
./txt/nabird_source.txt      ## NaBird
...
```
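In many domain adaptation codebases, each line of such a file list is an image path followed by an integer class label. The sketch below (ours, not part of the repository) assumes that format; check the actual txt files before relying on it:

```python
# Hypothetical loader sketch (not part of the repository). It assumes the
# common file-list format "relative/image/path <label>" per line; verify
# against the actual txt files before use.
from pathlib import Path

def read_file_list(txt_path):
    """Return (image_path, label) pairs parsed from a file list."""
    samples = []
    for line in Path(txt_path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines
        path, label = line.rsplit(maxsplit=1)
        samples.append((path, int(label)))
    return samples

# Example usage:
# samples = read_file_list("./txt/source_amazon_opda.txt")
# print(len(samples), samples[0])
```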

## Training and Evaluation

All training scripts are stored in the `scripts` directory. For example, to run open-set domain adaptation on Office:

```sh
sh scripts/run_office_obda.sh $gpu-id train.py
```
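If you want to queue several runs programmatically, a small wrapper like the following (ours, hypothetical, not part of the repository) can invoke the shell entry point via `subprocess`:

```python
# Hypothetical launcher (not part of the repository): invokes the repo's
# shell script as "sh <script> <gpu_id> <entry>" and waits for completion.
import subprocess

def run_training(script="scripts/run_office_obda.sh", gpu_id=0, entry="train.py"):
    cmd = ["sh", script, str(gpu_id), entry]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

if __name__ == "__main__":
    run_training(gpu_id=0)
```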

## Reference

This repository is contributed by Kuniaki Saito. If you use this code or its derivatives, please consider citing:

```bibtex
@article{saito2021ovanet,
  title={OVANet: One-vs-All Network for Universal Domain Adaptation},
  author={Saito, Kuniaki and Saenko, Kate},
  journal={arXiv preprint arXiv:2104.03344},
  year={2021}
}
```