complete doc for MIRO
smilesun committed Sep 24, 2024
1 parent 2604536 commit c56b55a
19 changes: 19 additions & 0 deletions docs/doc_miro.md
@@ -43,3 +43,22 @@

$$\sum_i \left\langle \|X_i-m(Y_i)\|^2_{\Sigma^{-1}(Y_i)} + \log\det(\Sigma(Y_i)) \right\rangle_{p(Y|X)}$$


## MIRO

MIRO tries to match the pre-trained model's features, layer by layer, to those of the target neural network we want to train, maximizing their mutual information to encourage domain invariance. It uses a constant identity map as the mean encoder on the features of the target neural network, together with a population variance $\Sigma$ (forced to be diagonal).

Let $z$ denote the intermediate features of a given layer, let $f_0$ be the pre-trained model and $f$ the target neural network, and let $x$ be the input data.

$$z_f=f(x)$$

$$z_{f_0}=f_0(x)$$

The lower bound on the mutual information for instance $i$ gives rise (up to sign and additive constants) to the per-instance loss

$$\log|\Sigma| + \|z^{(i)}_{f_0}-\mathrm{id}(z^{(i)}_{f})\|^2_{\Sigma^{-1}}$$

where $\mathrm{id}$, the identity map, serves as the mean encoder.
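
For context, this loss comes from a variational lower bound on the mutual information (a Barber–Agakov style bound; the following is a sketch, with $q$ denoting the Gaussian variational distribution assumed here):

$$I(z_{f_0}; z_f) \geq H(z_{f_0}) + \left\langle \log q(z_{f_0}|z_f) \right\rangle_{p(z_{f_0}, z_f)}$$

With $q(z_{f_0}|z_f)=\mathcal{N}(\mathrm{id}(z_f), \Sigma)$, the log-density is $-\frac{1}{2}\left(\log|\Sigma| + \|z_{f_0}-\mathrm{id}(z_f)\|^2_{\Sigma^{-1}}\right)$ up to an additive constant, so maximizing the bound over training instances amounts to minimizing the per-instance loss above.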

For diagonal $\Sigma$, the determinant is simply the product of the diagonal entries $\sigma_k$, so the per-instance loss decomposes coordinate-wise:

$$\log|\Sigma| + \|z^{(i)}_{f_0}-z^{(i)}_{f}\|^2_{\Sigma^{-1}} = \sum_{k} \left( \log \sigma_k + \frac{\left(z^{(i)}_{f_0,k}-z^{(i)}_{f,k}\right)^2}{\sigma_k} \right)$$
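
A minimal PyTorch sketch of this per-layer loss, assuming flattened feature tensors of shape `(batch, dim)` and a learnable log-variance vector to parameterize the diagonal $\Sigma$ (class and variable names here are illustrative, not taken from the MIRO reference implementation):

```python
import torch
from torch import nn


class MIROLayerLoss(nn.Module):
    """Per-layer MIRO regularizer: log|Sigma| plus the squared Mahalanobis
    distance between pre-trained and target features, with diagonal Sigma."""

    def __init__(self, dim):
        super().__init__()
        # Learnable log-variances guarantee sigma_k > 0.
        self.log_var = nn.Parameter(torch.zeros(dim))

    def forward(self, z_f, z_f0):
        # z_f:  features of the target network f,        shape (batch, dim)
        # z_f0: features of the frozen pre-trained f_0,  shape (batch, dim)
        var = self.log_var.exp()
        # log|Sigma| = sum_k log sigma_k for a diagonal covariance.
        log_det = self.log_var.sum()
        # Coordinate-wise squared Mahalanobis distance: sum_k (dz_k)^2 / sigma_k.
        mahalanobis = ((z_f0.detach() - z_f) ** 2 / var).sum(dim=1)
        return (log_det + mahalanobis).mean()
```

In practice this loss is summed over the selected layers and added to the task loss with a trade-off coefficient; the pre-trained features are detached since $f_0$ stays frozen.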
