Merge pull request #876 from marrlab/doc_example
put all examples in doc_example.md into separate docs
smilesun authored Sep 17, 2024
2 parents 77b64d6 + c46205b commit 588fff9
Showing 12 changed files with 203 additions and 186 deletions.
5 changes: 0 additions & 5 deletions docs/conf.py
@@ -168,11 +168,6 @@
"internal": True,
"title": "Examples with MNIST",
},
{
"href": "doc_examples",
"internal": True,
"title": "More commandline examples",
},
{"href": "doc_benchmark", "internal": True, "title": "Benchmarks tutorial"},
{"href": "doc_output", "internal": True, "title": "Output Structure"},
{
17 changes: 16 additions & 1 deletion docs/docDIAL.md
@@ -73,8 +73,23 @@ This procedure yields the following hyperparameters:
- `--gamma_reg`: ? ($\epsilon$ in the paper)
- `--lr`: learning rate ($\alpha$ in the paper)

# Examples
## Examples

```shell
python main_out.py --te_d=0 --task=mnistcolor10 --model=erm --trainer=dial --nname=conv_bn_pool_2
```



### Keep the trained model
```shell
python main_out.py --te_d=0 --task=mnistcolor10 --keep_model --model=erm --trainer=dial --nname=conv_bn_pool_2
```
### Train DIVA model with DIAL trainer

```shell
python main_out.py --te_d 0 1 2 --tr_d 3 7 --task=mnistcolor10 --model=diva --nname=conv_bn_pool_2 --nname_dom=conv_bn_pool_2 --gamma_y=7e5 --gamma_d=1e5 --trainer=dial
```
### Set hyper-parameters for the trainer as well
```shell
python main_out.py --te_d 0 1 2 --tr_d 3 7 --task=mnistcolor10 --model=diva --nname=conv_bn_pool_2 --nname_dom=conv_bn_pool_2 --gamma_y=7e5 --gamma_d=1e5 --trainer=dial --dial_steps_perturb=1
```
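### Set regularization weight and learning rate

The other trainer hyper-parameters listed above, `--gamma_reg` and `--lr`, can be set the same way. A sketch, where the flag names come from the hyperparameter list above but the values `0.1` and `1e-4` are illustrative assumptions, not recommended settings:

```shell
# --gamma_reg and --lr are the DIAL trainer hyper-parameters described above;
# the concrete values here are chosen only for illustration
python main_out.py --te_d=0 --task=mnistcolor10 --model=erm --trainer=dial --nname=conv_bn_pool_2 --gamma_reg=0.1 --lr=1e-4
```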
2 changes: 1 addition & 1 deletion docs/docFishr.md
@@ -72,7 +72,7 @@ For more details, see the reference below or the domainlab code.



# Examples
## Examples
```shell
python main_out.py --te_d=0 --task=mini_vlcs --model=erm --trainer=fishr --nname=alexnet --bs=2 --nocu
```
51 changes: 42 additions & 9 deletions docs/docHDUVA.md
@@ -52,7 +52,38 @@ Alternatively, one could use an existing neural network in DomainLab using `nnam

## Hyperparameter for warmup
Finally, the number of epochs for hyper-parameter warm-up can be specified via the argument `warmup`.
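For instance, a sketch of such a run: the `warmup` argument is described above, but the flag spelling `--warmup` and the value `20` are assumptions made for illustration:

```shell
# warm up the hyper-parameters for 20 epochs (flag spelling and value assumed for illustration)
python main_out.py --te_d=caltech --bs=2 --task=mini_vlcs --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2 --warmup=20
```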



Please cite our paper if you find it useful!
```text
@inproceedings{sun2021hierarchical,
title={Hierarchical Variational Auto-Encoding for Unsupervised Domain Generalization},
author={Sun, Xudong and Buettner, Florian},
booktitle={ICLR 2021 RobustML workshop, https://arxiv.org/pdf/2101.09436.pdf},
year={2021}
}
```



## Examples

### hduva with custom net for sandwich encoder
```shell
python main_out.py --te_d=caltech --bs=2 --task=mini_vlcs --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --npath_encoder_sandwich_x2h4zd=examples/nets/resnet.py
```

### hduva with custom net for topic encoder
```shell
python main_out.py --te_d=caltech --bs=2 --task=mini_vlcs --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --npath_encoder_x2topic_h=examples/nets/resnet.py --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
```

### hduva with custom net for classification encoder
```shell
python main_out.py --te_d=caltech --bs=2 --task=mini_vlcs --model=hduva --npath=examples/nets/resnet.py --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
```

### hduva on color mnist, trained on 2 domains
```shell
python main_out.py --tr_d 0 1 2 --te_d 3 --bs=2 --task=mnistcolor10 --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
@@ -63,14 +94,16 @@ python main_out.py --tr_d 0 1 2 --te_d 3 --bs=2 --task=mnistcolor10 --model=hduv
python main_out.py --tr_d 0 --te_d 3 4 --bs=2 --task=mnistcolor10 --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
```

### hduva with implemented neural network
```shell
python main_out.py --te_d=caltech --bs=2 --task=mini_vlcs --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
```


Please cite our paper if you find it useful!
```text
@inproceedings{sun2021hierarchical,
title={Hierarchical Variational Auto-Encoding for Unsupervised Domain Generalization},
author={Sun, Xudong and Buettner, Florian},
booktitle={ICLR 2021 RobustML workshop, https://arxiv.org/pdf/2101.09436.pdf},
year={2021}
}
### hduva with alexnet
```shell
python main_out.py --te_d=caltech --bs=2 --task=mini_vlcs --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=alexnet
```




2 changes: 1 addition & 1 deletion docs/docIRM.md
@@ -26,7 +26,7 @@ where $\lambda$ is a hyperparameter that controls the trade-off between the empi
In practice, one can simply divide a mini-batch into two subsets, indexed by $i$ and $j$; the inner product of the gradients computed on subset $i$ and subset $j$ forms an unbiased estimate of the squared L2 norm of the gradient.
In detail: the squared gradient norm is estimated via the inner product between $\nabla_{w|w=1} \ell(w \circ \Phi(X^{(d, i)}), Y^{(d, i)})$ and $\nabla_{w|w=1} \ell(w \circ \Phi(X^{(d, j)}), Y^{(d, j)})$, both of dimension dim(Grad). For more details, see Section 3.2 and Appendix D of Arjovsky et al., "Invariant Risk Minimization."
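Written out, the splitting trick works because the two subsets are independent, so the expectation of the inner product factorizes (a sketch in the notation above, abbreviating the per-subset gradient as $g^{(d,\cdot)}$):

$$
\mathbb{E}\left[\left\langle g^{(d,i)},\, g^{(d,j)} \right\rangle\right]
= \left\langle \mathbb{E}\left[g^{(d,i)}\right],\, \mathbb{E}\left[g^{(d,j)}\right] \right\rangle
= \left\| \mathbb{E}\left[g^{(d)}\right] \right\|_2^2,
\qquad g^{(d,\cdot)} := \nabla_{w|w=1}\, \ell\big(w \circ \Phi(X^{(d,\cdot)}), Y^{(d,\cdot)}\big)
$$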

# Examples
## Examples
```shell
python main_out.py --te_d=0 --task=mnistcolor10 --model=erm --trainer=irm --nname=conv_bn_pool_2

14 changes: 14 additions & 0 deletions docs/docJiGen.md
@@ -36,3 +36,17 @@ Furthermore, the user can specify a custom grid length via `grid_len`.

_Reference_: Carlucci, Fabio M., et al. "Domain generalization by solving jigsaw puzzles."
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.


## Examples

### model jigen with implemented neural network
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=8 --model=jigen --nname=alexnet --pperm=1 --nperm=100 --grid_len=3
```


### sanity check with jigen tile shuffling
```shell
python main_out.py --te_d=sketch --tpath=examples/tasks/demo_task_path_list_small.py --debug --bs=8 --model=jigen --nname=alexnet --pperm=1 --nperm=100 --grid_len=3 --san_check
```
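### custom grid length

The custom grid length mentioned above can be varied the same way. A sketch: `--grid_len=4` is an illustrative assumption (the examples above use 3), and keeping `--nperm` alongside it is also an assumption:

```shell
# grid_len=4 splits each image into a 4x4 tile grid (value chosen only for illustration)
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=8 --model=jigen --nname=alexnet --pperm=1 --nperm=100 --grid_len=4
```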
26 changes: 26 additions & 0 deletions docs/docMatchDG.md
@@ -69,3 +69,29 @@ This procedure yields the following hyperparameters:
- `--epochs_ctr`: number of epochs for minimizing the contrastive loss in phase 1.
- `--epos_per_match_update`: Number of epochs before updating the match tensor. ($t$ from phase 1)
- `--gamma_reg`: weight for the regularization term in phase 2. ($\gamma_\text{reg}$ from phase 2)
## Examples

### trainer matchdg with custom neural network
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --bs=2 --model=erm --trainer=matchdg --epochs_ctr=3 --epos=6 --npath=examples/nets/resnet.py
```


### training hduva with matchdg

```shell
python main_out.py --te_d 0 1 2 --tr_d 3 7 --task=mnistcolor10 --bs=2 --model=hduva --trainer=matchdg --epochs_ctr=3 --epos=6 --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
```

### training implemented neural network with matchdg
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=2 --model=erm --trainer=matchdg --epochs_ctr=3 --epos=6 --nname=alexnet
```

### trainer matchdg with mnist
```shell
python main_out.py --te_d 0 1 2 --tr_d 3 7 --task=mnistcolor10 --model=erm --trainer=matchdg --nname=conv_bn_pool_2 --epochs_ctr=2 --epos=6
```
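### set matchdg trainer hyper-parameters

The remaining hyper-parameters listed above, `--epos_per_match_update` and `--gamma_reg`, can be set on the command line as well. A sketch: the flag names come from the hyperparameter list above, while the values `2` and `0.1` are illustrative assumptions:

```shell
# --epos_per_match_update and --gamma_reg are the matchdg hyper-parameters described above;
# the concrete values here are chosen only for illustration
python main_out.py --te_d 0 1 2 --tr_d 3 7 --task=mnistcolor10 --model=erm --trainer=matchdg --nname=conv_bn_pool_2 --epochs_ctr=2 --epos=6 --epos_per_match_update=2 --gamma_reg=0.1
```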


20 changes: 20 additions & 0 deletions docs/doc_custom_nn.md
@@ -23,3 +23,23 @@ python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=2 --model=erm --
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=2 --model=erm --trainer=matchdg --epochs_ctr=3 --epos=6 --npath=examples/nets/resnet.py
```


### model erm with custom neural network
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=8 --model=erm --npath=examples/nets/resnet.py
```

## Larger images

### model erm with implemented neural network
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=8 --model=erm --nname=alexnet
```

### model dann with implemented neural network
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=8 --model=dann --nname=alexnet
```


30 changes: 30 additions & 0 deletions docs/doc_diva.md
@@ -40,5 +40,35 @@ Furthermore, the user can specify the neural networks for the class and domain c
- `nname`/`npath`
- `nname_dom`/`npath_dom`


## Examples
### model diva with implemented neural network
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=2 --model=diva --nname=alexnet --npath_dom=examples/nets/resnet.py --gamma_y=7e5 --gamma_d=1e5
```

### model diva with custom neural network
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=2 --model=diva --npath=examples/nets/resnet.py --npath_dom=examples/nets/resnet.py --gamma_y=7e5 --gamma_d=1e5
```
### generation of images
```shell
python main_out.py --te_d=0 --task=mnistcolor10 --keep_model --model=diva --nname=conv_bn_pool_2 --nname_dom=conv_bn_pool_2 --gamma_y=10e5 --gamma_d=1e5 --gen
```
## Colored version of MNIST

### leave one domain out
```shell
python main_out.py --te_d=0 --task=mnistcolor10 --keep_model --model=diva --nname=conv_bn_pool_2 --nname_dom=conv_bn_pool_2 --gamma_y=10e5 --gamma_d=1e5
```

### choose train and test
```shell
python main_out.py --te_d 0 1 2 --tr_d 3 7 --task=mnistcolor10 --model=diva --nname=conv_bn_pool_2 --nname_dom=conv_bn_pool_2 --gamma_y=7e5 --gamma_d=1e5
```




_Reference:_
DIVA: Domain Invariant Variational Autoencoders, https://arxiv.org/pdf/1905.10427.pdf, Medical Imaging with Deep Learning. PMLR, 2020.
169 changes: 0 additions & 169 deletions docs/doc_examples.md

This file was deleted.

