This is a follow-up on this question and this answer. This time, let's take Torch, and the same simple circuit. It applies two gates to one qubit and evaluates the expectation value of PauliZ. When optimizing with Torch, the expectation value runs away to -Inf. I think my issue is that I cannot correctly select or filter which gates are parametrized (according to Torch). I have tried to work around it. This may be an issue with my usage of Torch, but I was wondering if someone knew a quick way to fix this. Feel free to ignore if this is outside the scope of this forum.

```python
# %%
import quimb as qu
import quimb.tensor as qtn
import numpy as np
import torch
# %%
def convert(x, requires_grad=False):
    return torch.tensor(x, dtype=torch.complex128, requires_grad=requires_grad)
quimb_obs = convert(qu.pauli("Z"), requires_grad=False)
where = 0
N = 1
# %%
def circuit(params):
    circ = qtn.Circuit(N)
    # two parametrized rotations on the single qubit
    circ.apply_gate("RX", params[0], 0, parametrize=True)
    circ.apply_gate("RY", params[1], 0, parametrize=True)
    return circ
def loss_fn(ket):
    bra = ket.H
    ket = ket.gate(quimb_obs, 0)
    expec = bra & ket
    return np.real(expec.contract(all))
params = np.array([0.011, 0.012])
circ = circuit(params)
psi = circ.psi
loss_fn(psi)
# %%
for t in psi[0]:
    t.set_params(convert(t.get_params(), isinstance(t, qtn.PTensor)))
# psi.apply_to_arrays(convert)
# %%
class TNModel(torch.nn.Module):

    def __init__(self, tn):
        super().__init__()
        # extract the raw arrays and a skeleton of the TN
        params, self.skeleton = qtn.pack(tn)
        # n.b. you might want to do extra processing here to e.g. store each
        # parameter as a reshaped matrix (from left_inds -> right_inds), for
        # some optimizers, and for some torch parametrizations
        self.torch_params = torch.nn.ParameterDict(
            {
                # torch requires strings as keys
                str(i): torch.nn.Parameter(initial, requires_grad=initial.requires_grad)
                for i, initial in params.items()
            }
        )

    def forward(self):
        # convert back to original int key format
        params = {int(i): p for i, p in self.torch_params.items()}
        # reconstruct the TN with the new parameters
        psi = qtn.unpack(params, self.skeleton)
        # contract and return the expectation value
        return loss_fn(psi)
# %%
model = TNModel(psi)
model()
#%%
optimizer = torch.optim.Adagrad(model.parameters())
its = 100
for _ in range(its):
    optimizer.zero_grad()
    loss = model()
    loss.backward()
    optimizer.step()
```
Maybe doing something like the following would work: `constant_tn, variable_tn = tn.partition(['RX', 'RY'])`, and then you only pack/unpack the variable part, and call `psi = constant_tn | variable_tn` in your loss function? Adding tag filtering to the … (By the way, I edited your Q just to add the ```python syntax highlighting, hope that's okay.)
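
A minimal sketch of what that could look like, reusing `psi`, `loss_fn`, `qtn` and `torch` from the question above (the class name `VariablePartModel` is made up here, and it assumes the `'RX'`/`'RY'` tags pick out exactly the parametrized tensors):

```python
# split psi into the tensors tagged 'RX'/'RY' (trainable) and the rest (fixed);
# partition returns (untagged_tn, tagged_tn)
constant_tn, variable_tn = psi.partition(["RX", "RY"])

class VariablePartModel(torch.nn.Module):

    def __init__(self, constant_tn, variable_tn):
        super().__init__()
        self.constant_tn = constant_tn
        # pack only the trainable part of the network
        params, self.skeleton = qtn.pack(variable_tn)
        self.torch_params = torch.nn.ParameterDict(
            {str(i): torch.nn.Parameter(p) for i, p in params.items()}
        )

    def forward(self):
        params = {int(i): p for i, p in self.torch_params.items()}
        variable_tn = qtn.unpack(params, self.skeleton)
        # recombine the fixed and trainable parts before evaluating the loss
        return loss_fn(self.constant_tn | variable_tn)

model = VariablePartModel(constant_tn, variable_tn)
```

The training loop from the question should then work unchanged with this `model`, since only the tensors coming from the parametrized gates are registered as torch parameters.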
I haven't had time to run through your example properly, but: you shouldn't need to set `requires_grad` yourself, that is all handled by setting attributes as parameters. Alternatively, just set `requires_grad` on the arrays, then simply contract the tensor network, and call `x.backward()` yourself, no need for `torch.nn.Module` or any abstractions at all:
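
The snippet that followed this reply isn't included above; here is a minimal sketch of that bare-bones approach, reusing `circuit`, `convert`, `loss_fn`, `np` and `qtn` from the question (the `leaves` list is only there to read the gradients back out and is not part of the original reply):

```python
# fresh circuit, then convert every tensor's arrays to torch, flagging only the
# parametrized (PTensor) gate tensors as requiring gradients
psi = circuit(np.array([0.011, 0.012])).psi

leaves = []
for t in psi:
    leaf = convert(t.get_params(), requires_grad=isinstance(t, qtn.PTensor))
    t.set_params(leaf)
    if leaf.requires_grad:
        leaves.append(leaf)

# contract the network to get a real-valued torch scalar...
expec = loss_fn(psi)
# ...and backprop through the contraction directly
expec.backward()

# gradients now live on the parametrized leaves
for leaf in leaves:
    print(leaf.grad)
```

From there you can update the leaves manually, or hand them straight to a `torch.optim` optimizer, without any `nn.Module` wrapper.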