Replies: 3 comments 1 reply
-
Hi @rafaeleb, yes, most operations in quimb dispatch on whatever type of arrays are in the tensors, so you just need to get torch arrays into the tensor networks. From that example, the following lines would be directly applicable:

```python
import torch

# e.g. convert every array in the networks to a torch tensor
mps.apply_to_arrays(lambda x: torch.tensor(x, dtype=torch.float32))
mpo.apply_to_arrays(lambda x: torch.tensor(x, dtype=torch.float32))
```

Then you'd just need to specify whatever loss function for your MPS. For optimizing an MPS specifically there are a few options.
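For concreteness, here is a minimal end-to-end sketch of the direct torch route (the system size, bond dimension, Heisenberg MPO, and plain Adam loop are just illustrative choices, not a recommendation):

```python
import torch
import quimb.tensor as qtn

L = 16

# build a random MPS and a Heisenberg MPO as numpy arrays first
mps = qtn.MPS_rand_state(L, bond_dim=8)
mpo = qtn.MPO_ham_heis(L)

# move everything to torch; only the MPS arrays need gradients
mps.apply_to_arrays(lambda x: torch.tensor(x, dtype=torch.float64, requires_grad=True))
mpo.apply_to_arrays(lambda x: torch.tensor(x, dtype=torch.float64))

# the torch leaf tensors live inside the network, so an optimizer
# that updates them in place updates the MPS directly
opt = torch.optim.Adam(list(mps.arrays), lr=0.01)

for step in range(500):
    opt.zero_grad()
    # normalized energy <psi|H|psi> / <psi|psi>, contracted entirely in torch
    energy = qtn.expec_TN_1D(mps.H, mpo, mps) / (mps.H @ mps)
    energy.backward()
    opt.step()
```

Note the loss is divided by the norm rather than renormalizing the MPS, since a plain gradient step does not preserve normalization or canonical form.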
-
Hello Johnnie,

Thank you for the reply; it's nice to know it's supposed to work, so the issue is probably somewhere in my code. My goal is to build some kind of alternating optimization procedure between the MPS itself (pretty much as done in the Heisenberg optimization example) and some other parameters, which I am currently trying to optimize via a torch neural net. Since my loss function is somewhat complex, I have been having problems with backpropagation, but it seems mostly due to numpy operations along the way, which I am trying to avoid by making everything torch. I will try to elaborate on the problems a bit more as I work on it, so I hope it's OK to ask a few more questions here!

BR,
Rafael
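To make the question concrete, this is roughly the kind of alternating loop I have in mind (the toy net, the Pauli-Z expectation values, and the strict even/odd alternation below are all stand-ins for my actual setup); the point is just that every operation stays in torch so backpropagation can trace the whole graph:

```python
import torch
import quimb.tensor as qtn

L = 12
mps = qtn.MPS_rand_state(L, bond_dim=4)
mps.apply_to_arrays(lambda x: torch.tensor(x, dtype=torch.float64, requires_grad=True))

# a small torch net standing in for the "other parameters"
net = torch.nn.Sequential(
    torch.nn.Linear(L, 16), torch.nn.Tanh(), torch.nn.Linear(16, L),
).double()

opt_mps = torch.optim.Adam(list(mps.arrays), lr=0.01)
opt_net = torch.optim.Adam(net.parameters(), lr=0.01)

pauli_z = torch.diag(torch.tensor([1.0, -1.0], dtype=torch.float64))

def loss_fn():
    # couple the net outputs to per-site <Z_i> expectation values;
    # no numpy appears anywhere, so torch autograd sees every operation
    weights = net(torch.ones(L, dtype=torch.float64))
    norm = mps.H @ mps
    exps = torch.stack([
        (mps.H @ mps.gate(pauli_z, i)) / norm for i in range(L)
    ])
    return (weights * exps).sum()

for step in range(100):
    # alternate: update the MPS with the net frozen, then vice versa
    opt = opt_mps if step % 2 == 0 else opt_net
    opt.zero_grad()
    loss_fn().backward()
    opt.step()
```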
-
Thanks Johnnie! I am now working on making everything jax-compatible, which seems like the best choice at this moment. I'll open a different discussion if needed.
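For anyone who finds this later, the jax route I am trying looks roughly like the following, using quimb's TNOptimizer; the Heisenberg energy is only a stand-in for my actual loss:

```python
import quimb.tensor as qtn

L = 16
psi0 = qtn.MPS_rand_state(L, bond_dim=8)
ham = qtn.MPO_ham_heis(L)

def loss_fn(psi, ham):
    # normalized MPO energy; the contractions dispatch to jax arrays
    return qtn.expec_TN_1D(psi.H, ham, psi) / (psi.H @ psi)

tnopt = qtn.TNOptimizer(
    psi0,
    loss_fn=loss_fn,
    loss_constants={'ham': ham},
    autodiff_backend='jax',
)
psi_opt = tnopt.optimize(1000)
```

Since jax traces and compiles the loss, it needs to be a pure function of the tensor network and the declared constants, which is another reason to keep stray numpy operations out of it.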
-
Hello all,
I am looking to run some optimization on MPSs created in Quimb, but using a torch backend. I see that there is an example in the documentation for optimizing a TN using Quimb with torch. But is there an analogous procedure for an MPS? Is there a way to easily create a random MPS that is torch-compatible?
Thanks!
Rafael