Commit c46205b
smilesun committed Sep 17, 2024
1 parent 50ce5a9 commit c46205b
Showing 4 changed files with 4 additions and 6 deletions.
3 changes: 1 addition & 2 deletions docs/docDIAL.md
@@ -73,15 +73,14 @@ This procedure yields the following hyperparameters:
- `--gamma_reg`: ? ($\epsilon$ in the paper)
- `--lr`: learning rate ($\alpha$ in the paper)

-# Examples
+## Examples

```shell
python main_out.py --te_d=0 --task=mnistcolor10 --model=erm --trainer=dial --nname=conv_bn_pool_2
```



## Adversarial image training
```shell
python main_out.py --te_d=0 --task=mnistcolor10 --keep_model --model=erm --trainer=dial --nname=conv_bn_pool_2
```
2 changes: 1 addition & 1 deletion docs/docFishr.md
@@ -72,7 +72,7 @@ For more details, see the reference below or the domainlab code.



-# Examples
+## Examples
```shell
python main_out.py --te_d=0 --task=mini_vlcs --model=erm --trainer=fishr --nname=alexnet --bs=2 --nocu
```
2 changes: 1 addition & 1 deletion docs/docIRM.md
@@ -26,7 +26,7 @@ where $\lambda$ is a hyperparameter that controls the trade-off between the empirical risk and the invariance penalty.
In practice, one can simply divide a mini-batch into two subsets, indexed by $i$ and $j$; multiplying the gradient from subset $i$ with the gradient from subset $j$ gives an unbiased estimate of the squared L2 norm of the gradient.
In detail: the squared gradient norm is estimated via the inner product of $\nabla_{w|w=1} \ell(w \circ \Phi(X^{(d,i)}), Y^{(d,i)})$ and $\nabla_{w|w=1} \ell(w \circ \Phi(X^{(d,j)}), Y^{(d,j)})$, both of dimension dim(Grad). For more details, see Section 3.2 and Appendix D of Arjovsky et al., "Invariant Risk Minimization."
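
A minimal PyTorch sketch of this estimator, assuming a scalar dummy classifier weight `w` fixed at 1 (the $w|_{w=1}$ above); the function name and the `logits_*` arguments (the featurizer outputs $\Phi(X^{(d,\cdot)})$ on the two subsets) are hypothetical, not the domainlab implementation:

```python
import torch
import torch.nn.functional as F

def squared_grad_norm_estimate(logits_i, y_i, logits_j, y_j):
    """Unbiased estimate of the squared L2 norm of the gradient w.r.t. the
    dummy classifier weight w, via the inner product of the gradients
    computed on two disjoint mini-batch subsets i and j."""
    # Dummy classifier weight w, fixed at 1.0.
    w = torch.tensor(1.0, requires_grad=True)
    grad_i = torch.autograd.grad(
        F.cross_entropy(logits_i * w, y_i), [w], create_graph=True)[0]
    grad_j = torch.autograd.grad(
        F.cross_entropy(logits_j * w, y_j), [w], create_graph=True)[0]
    # E[<grad_i, grad_j>] = ||E[grad]||^2 since the two subsets are independent.
    return (grad_i * grad_j).sum()
```

Squaring a single subset's gradient would bias the estimate upward by the gradient's variance; taking the product of two independent subset gradients removes that bias.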

-# Examples
+## Examples
```shell
python main_out.py --te_d=0 --task=mnistcolor10 --model=erm --trainer=irm --nname=conv_bn_pool_2

```
3 changes: 1 addition & 2 deletions docs/doc_mldg.md
@@ -4,8 +4,7 @@
Li, Da, et al. "Learning to generalize: Meta-learning for domain generalization." Proceedings of the AAAI conference on artificial intelligence. Vol. 32. No. 1. 2018.


-# Examples
-## Meta Learning Domain Generalization
+## Examples
```shell
python main_out.py --te_d=caltech --task=mini_vlcs --debug --bs=8 --model=erm --trainer=mldg --nname=alexnet
```
