split examples into algo doc #874

Merged: 9 commits, Sep 11, 2024
6 changes: 6 additions & 0 deletions docs/docDIAL.md
@@ -72,3 +72,9 @@ This procedure exposes the following hyperparameters:
- `--dial_epsilon`: pixel wise threshold to perturb images
- `--gamma_reg`: ? ($\epsilon$ in the paper)
- `--lr`: learning rate ($\alpha$ in the paper)

# Examples

```shell
python main_out.py --te_d=0 --task=mnistcolor10 --model=erm --trainer=dial --nname=conv_bn_pool_2
```
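
The hyperparameters listed above can also be set explicitly on the command line; the values below are purely illustrative, not tuned recommendations:

```shell
python main_out.py --te_d=0 --task=mnistcolor10 --model=erm --trainer=dial --nname=conv_bn_pool_2 --dial_epsilon=0.001 --gamma_reg=1.0 --lr=1e-4
```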
5 changes: 5 additions & 0 deletions docs/docFishr.md
@@ -72,10 +72,15 @@ For more details, see the reference below or the domainlab code.



# Examples
```shell
python main_out.py --te_d=0 --task=mini_vlcs --model=erm --trainer=fishr --nname=alexnet --bs=2 --nocu
```



_Reference:_
Rame, Alexandre, Corentin Dancette, and Matthieu Cord. "Fishr:
Invariant gradient variances for out-of-distribution generalization."
International Conference on Machine Learning. PMLR, 2022.

12 changes: 12 additions & 0 deletions docs/docHDUVA.md
@@ -52,6 +52,18 @@ Alternatively, one could use an existing neural network in DomainLab using `nname`.

## Hyperparameter for warmup
Finally, the number of epochs for hyper-parameter warm-up can be specified via the argument `warmup`.
## Examples
### use hduva on color mnist, train on 2 domains
```shell
python main_out.py --tr_d 0 1 2 --te_d 3 --bs=2 --task=mnistcolor10 --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
```

### hduva is domain-unsupervised, so it also works with a single training domain
```shell
python main_out.py --tr_d 0 --te_d 3 4 --bs=2 --task=mnistcolor10 --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
```
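
### setting the warm-up period explicitly
The number of warm-up epochs can be passed on the command line as well; assuming the flag name matches the `warmup` argument described above, with an illustrative value:
```shell
python main_out.py --tr_d 0 --te_d 3 4 --bs=2 --task=mnistcolor10 --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2 --warmup=20
```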



Please cite our paper if you find it useful!
4 changes: 4 additions & 0 deletions docs/docIRM.md
@@ -26,4 +26,8 @@ where $\lambda$ is a hyperparameter that controls the trade-off between the empirical risk and the invariance penalty.
In practice, one can simply split a mini-batch into two subsets, indexed by $i$ and $j$; the inner product of the gradients computed on the two subsets gives an unbiased estimate of the squared L2 norm of the gradient.
In detail: the squared gradient norm is estimated via the inner product between $\nabla_{w|w=1} \ell(w \circ \Phi(X^{(d, i)}), Y^{(d, i)})$ and $\nabla_{w|w=1} \ell(w \circ \Phi(X^{(d, j)}), Y^{(d, j)})$, both of dimension dim(Grad). For more details, see Section 3.2 and Appendix D of Arjovsky et al., "Invariant Risk Minimization."
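
A minimal PyTorch sketch of this batch-splitting estimator, assuming a scalar dummy classifier weight $w = 1$ and cross-entropy loss (an illustration of the idea, not the exact DomainLab implementation):

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Unbiased estimate of the squared gradient norm at w = 1.

    The mini-batch is split into two halves (even/odd indices play
    the role of subsets i and j); the inner product of the two
    half-batch gradients w.r.t. the scalar dummy weight w is an
    unbiased estimator of the squared L2 norm of the gradient.
    """
    w = torch.tensor(1.0, requires_grad=True)
    loss_i = F.cross_entropy(logits[0::2] * w, y[0::2])
    loss_j = F.cross_entropy(logits[1::2] * w, y[1::2])
    grad_i = torch.autograd.grad(loss_i, [w], create_graph=True)[0]
    grad_j = torch.autograd.grad(loss_j, [w], create_graph=True)[0]
    return torch.sum(grad_i * grad_j)
```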

# Examples
```shell
python main_out.py --te_d=0 --task=mnistcolor10 --model=erm --trainer=irm --nname=conv_bn_pool_2
```
10 changes: 0 additions & 10 deletions docs/doc_examples.md
@@ -26,16 +26,6 @@ python main_out.py --te_d 0 1 --tr_d 3 5 --task=mnistcolor10 --debug --bs=2 --mo
python main_out.py --te_d=0 --task=mnistcolor10 --keep_model --model=diva --nname=conv_bn_pool_2 --nname_dom=conv_bn_pool_2 --gamma_y=10e5 --gamma_d=1e5 --gen
```

### use hduva on color mnist, train on 2 domains
```shell
python main_out.py --tr_d 0 1 2 --te_d 3 --bs=2 --task=mnistcolor10 --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
```

### hduva is domain-unsupervised, so it works also with a single domain
```shell
python main_out.py --tr_d 0 --te_d 3 4 --bs=2 --task=mnistcolor10 --model=hduva --nname=conv_bn_pool_2 --gamma_y=7e5 --nname_encoder_x2topic_h=conv_bn_pool_2 --nname_encoder_sandwich_x2h4zd=conv_bn_pool_2
```


## Larger images:

24 changes: 17 additions & 7 deletions scripts/ci_run_examples.sh
@@ -3,14 +3,24 @@ set -e # exit upon first error
# >> append content
# > erase original content

# echo "#!/bin/bash -x -v" > sh_temp_example.sh
sed -n '/```shell/,/```/ p' docs/doc_examples.md | sed '/^```/ d' >> ./sh_temp_example.sh
split -l 5 sh_temp_example.sh sh_example_split
for file in sh_example_split*;
do (echo "#!/bin/bash -x -v" > "$file"_exe && cat "$file" >> "$file"_exe && bash -x -v "$file"_exe && rm -r zoutput);

files=("docs/docDIAL.md" "docs/docIRM.md" "docs/doc_examples.md" "docs/docHDUVA.md")

for file in "${files[@]}"
do
echo "Processing $file"
# no need to remove sh_temp_algo.sh since the following line overwrite it each time
echo "#!/bin/bash -x -v" > sh_temp_algo.sh
# remove code marker ```
# we use >> here to append to keep the header #!/bin/bash -x -v
sed -n '/```shell/,/```/ p' $file | sed '/^```/ d' >> ./sh_temp_algo.sh
cat sh_temp_algo.sh
bash -x -v -e sh_temp_algo.sh
echo "finished with $file"
done
# bash -x -v -e sh_temp_example.sh
echo "general examples done"



echo "#!/bin/bash -x -v" > sh_temp_mnist.sh
sed -n '/```shell/,/```/ p' docs/doc_MNIST_classification.md | sed '/^```/ d' >> ./sh_temp_mnist.sh
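
To preview which commands the loop would execute for a single doc, the same extraction pipeline used in the script can be run by hand (illustrative):

```shell
sed -n '/```shell/,/```/ p' docs/docIRM.md | sed '/^```/ d'
```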