Question about TCA np.argsort() #219
-
Hello jindongwang, thanks for this amazing repo. I have a question about the TCA implementation in Python, around lines 62-64.
So it takes the eigenvectors with the smallest eigenvalues? Isn't it supposed to be the opposite? I also tried switching the sort order to descending. Sorry if I am not allowed to ask here, but I don't know where else I should ask.
-
Good question. The code is correct. It is the eigenvectors with the smallest eigenvalues that we need, since we aim to minimize the objective function.
The objective, as you know, is solved via the Rayleigh quotient; thus, we use the smallest eigenvectors in our problem. You can refer to Page 11 of this slide: https://www.sjsu.edu/faculty/guangliang.chen/Math253S20/lec4RayleighQuotient.pdf
When do we need the largest eigenvectors? See this article: https://papers.nips.cc/paper/2001/file/d5c186983b52c4551ee00f72316c6eaa-Paper.pdf
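To make the sort direction concrete, here is a minimal NumPy sketch. The matrix `M` is a toy stand-in for the eigenproblem solved in the TCA code (illustrative only); since `np.linalg.eig` does not guarantee any ordering, the ascending `np.argsort` is what selects the smallest-eigenvalue eigenvectors:

```python
import numpy as np

# Toy symmetric matrix standing in for the TCA eigenproblem (illustrative only).
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

w, V = np.linalg.eig(M)
ind = np.argsort(w)      # ascending: smallest eigenvalues come first
k = 2
W = V[:, ind[:k]]        # keep the eigenvectors of the k SMALLEST eigenvalues

# Each kept column attains its eigenvalue as the Rayleigh quotient v^T M v / v^T v,
# so these columns are exactly the directions that minimize the quotient.
rq = np.array([v @ M @ v / (v @ v) for v in W.T])
print(rq)  # matches the two smallest eigenvalues
```

Sorting descending (`np.argsort(w)[::-1]`) would instead pick the directions that maximize the quotient, which is what you want in methods like kernel PCA, but not here.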
-
Thank you for getting back to me and for providing the references; I will read them! One more question, if you don't mind: how does this work for new, unseen data? Let's say:
Variable | Description | Value
---|---|---|
Xs | source data | (ns, n_feat)
Xt1 | target data 1 | (nt1, n_feat)
Xt2 | target data 2 | (nt2, n_feat)
ns | number of source samples | 100
nt1 | number of target data 1 samples | 10
nt2 | number of target data 2 samples | 5
n_feat | feature dimension of source and target | 20
tca_dim | the dimension after TCA | 3
K_A | kernel matrix of source and target data 1 | ((ns + nt1), (ns + nt1))
K_C | kernel matrix of source and target data 2 | ((ns + nt2), (ns + nt2))
W_proj_A | projection matrix from K_A | ((ns + nt1), tca_dim)
K_A and K_C refer to approaches A and C in the code above. Now I want to use W_proj_A for Xt2. I tried pairing Xt2 with Xs (ns samples), but cannot, because the kernel matrix sizes don't match:
Approach | K size | W_proj_A size |
---|---|---|
A | (110, 110) | (110, 3) |
C | (105, 105) | (110, 3) (??) |
Approach A maps K_A into a 3-dimensional latent space, but as can be seen, I cannot use W_proj_A on K_C due to the size mismatch. What should the kernel matrix or the projection matrix be in approach C?
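For concreteness, the mismatch can be reproduced with random placeholder matrices of the sizes above (a sketch; the values are arbitrary, only the shapes matter):

```python
import numpy as np

# Shapes taken from the table above: ns=100, nt1=10, nt2=5, tca_dim=3.
# The matrix entries are random placeholders; only the shapes matter here.
ns, nt1, nt2, tca_dim = 100, 10, 5, 3
rng = np.random.default_rng(0)

K_A = rng.standard_normal((ns + nt1, ns + nt1))       # (110, 110)
K_C = rng.standard_normal((ns + nt2, ns + nt2))       # (105, 105)
W_proj_A = rng.standard_normal((ns + nt1, tca_dim))   # (110, 3)

Z_A = K_A @ W_proj_A      # works: (110, 110) @ (110, 3) -> (110, 3)
try:
    Z_C = K_C @ W_proj_A  # fails: (105, 105) @ (110, 3), inner dims differ
except ValueError as e:
    print("shape mismatch:", e)
```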
-
I am trying to make a comparison of the incremental versions of dimensionality reduction methods. Could someone tell me where I can find code for Incremental Locally Linear Embedding, Incremental Multidimensional Scaling, or Incremental Laplacian Eigenmaps? Thank you. Best regards.