Hi @khidanov, sorry for the slow reply! Yes, basically the problem is that the simplification process is data/parameter dependent and so not currently cached. A partial improvement is to use fewer simplify methods. Longer term it is probably possible to trace through the simplifications - as long as the different parameters don't yield different sparsity or low-rank structures - so that the logic doesn't need to be performed each step.
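To see why a data-dependent simplification cannot simply be cached, here is a minimal NumPy sketch (not quimb's actual simplifier; the gate and names are illustrative). A rank-revealing simplification performed for one parameter value can be invalid for another, because the low-rank structure itself depends on the parameter:

```python
import numpy as np

def numerical_rank(A, tol=1e-10):
    """Count singular values above tol."""
    return int(np.sum(np.linalg.svd(A, compute_uv=False) > tol))

# A toy parameterized 'gate': diag(1, sin(theta)).
# At theta = 0 it is rank-1, so a rank-revealing simplification
# could split it into two smaller factors; at theta = pi/2 it is
# full rank and that cached split would lose information.
M = lambda theta: np.diag([1.0, np.sin(theta)])

print(numerical_rank(M(0.0)))        # rank 1: low-rank split is valid here
print(numerical_rank(M(np.pi / 2)))  # rank 2: the same split would be wrong
```

This is why the reply above distinguishes the contraction *path* (structure-only, reusable) from the *simplifications* (value-dependent, currently redone each time).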
Hi! I have a question about reusing the contraction path for reparameterized circuits. In my problem I need to calculate expectation values/amplitudes many times for the same parameterized circuit but with different sets of parameter values. Since the structure of the tensor network is the same for every set of parameter values, I would expect that the local simplifications and the search for an optimal contraction path could be done once, and that reusing the resulting contraction path for the other parameter sets would speed up the contraction. However, this does not seem to be the case, and I am not sure why.
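The "find the path once, reuse it for new data" idea can be shown generically with NumPy's einsum (quimb/cotengra work similarly in spirit; the network below is illustrative, not the circuit from the question):

```python
import numpy as np

# A small tensor network: fixed structure, changing data.
subscripts = 'ab,bc,cd,da->'
shapes = [(8, 8)] * 4

def make_tensors(seed):
    rng = np.random.default_rng(seed)
    return [rng.normal(size=s) for s in shapes]

# Find the contraction path once (structure-only work) ...
tensors0 = make_tensors(0)
path, info = np.einsum_path(subscripts, *tensors0, optimize='optimal')

# ... then reuse it for any new set of tensor values.
x0 = np.einsum(subscripts, *tensors0, optimize=path)
x1 = np.einsum(subscripts, *make_tensors(1), optimize=path)

# Reusing the path changes nothing about the result; it only skips the search.
assert np.isclose(x0, np.einsum(subscripts, *tensors0, optimize=False))
```

The path depends only on the index structure and shapes, so it is safe to reuse across parameter values; the complication in quimb is the simplification stage that runs before pathfinding.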
Here is an illustrative example, adapted from the parameterized-circuit example in the Quimb tutorials:
Here I define an observable corresponding to a Pauli string:
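The original quimb snippets did not survive the export of this thread. As a hedged stand-in for the setup described above, here is a pure-NumPy sketch of a tiny two-qubit parameterized circuit and a Pauli-string expectation value (all gate names and the ansatz are illustrative, not the tutorial's actual circuit):

```python
import numpy as np

# Single-qubit Pauli Z and a parameterized RY rotation.
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

# CNOT with qubit 0 as control (basis index = 2*q0 + q1).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def expectation(params):
    """<psi(params)| Z x Z |psi(params)> for a tiny 2-qubit ansatz."""
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0                                   # start in |00>
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CNOT @ psi
    G = np.kron(Z, Z)                              # the Pauli-string observable
    return (psi.conj() @ G @ psi).real

print(expectation([0.0, 0.0]))   # |00> gives <ZZ> = 1.0
```

In quimb the same structure would be a tensor network rather than dense vectors, but the role of the parameters (they change tensor data, not network structure) is the same.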
Here are the timing estimates for the different steps:
The first timing output is for rehearsing the contraction. The second one is for contracting the "rehearsed" TN. The third is for contracting the TN using the obtained contraction path. And the fourth is for contracting a reparameterized TN using the obtained contraction path. On my machine these timings are given as t1≈42ms, t2≈2.6ms, t3≈11ms, t4≈122ms, respectively.
First of all, I am not entirely sure why t2 and t3 are different. My guess is that t3 involves an extra simplification step that was already done for t2. Second, and most importantly, t4 is drastically different from both t2 and t3. However, my understanding is that in principle the t4 operation should be as fast as t2, which would dramatically speed up my calculations.
Am I missing something here? I am fairly new to Quimb and to the whole world of tensor networks, so I would appreciate it if you could provide any input/advice. Thanks!