Fix pytorch gradients #1450
Conversation
I found a potential problem in how the parameters of parametrized gates are cast (qibo/src/qibo/backends/pytorch.py, line 32 at 1b6eb8a): here you cast to self.dtype, which is usually torch.complex128, but it should actually be torch.float32/64 since this is a parameter. I don't know whether this is the origin of the problem with the gradients, though.
EDIT: never mind, you necessarily have to cast to complex, otherwise PyTorch complains about mismatching dtypes when you build the matrix elements.
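For context, here is a minimal standalone PyTorch sketch (not the qibo backend code) of the constraint described above: the real parameter has to be promoted to complex to assemble the matrix entries, and the gradient still reaches it afterwards.

import torch

# A real gate parameter that should stay differentiable.
theta = torch.tensor(0.3, dtype=torch.float64, requires_grad=True)

# The matrix entries are complex, so the (real) parameter is promoted
# to complex128 before the entries are assembled.
cos = torch.cos(theta / 2).to(torch.complex128)
sin = torch.sin(theta / 2).to(torch.complex128)
matrix = torch.stack([torch.stack([cos, -1j * sin]),
                      torch.stack([-1j * sin, cos])])

# Gradients still flow back to the real parameter through the complex cast.
matrix.real.sum().backward()
print(theta.grad)  # d/dtheta [2 * cos(theta / 2)] = -sin(theta / 2)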
Actually, this may be part of the problem. I am rewriting the whole casting function in a better way, differentiating the matrix_dtype from the parameters_dtype.
I think that now everything has been fixed.
Codecov Report: all modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #1450      +/-   ##
==========================================
- Coverage   99.93%   99.93%   -0.01%
==========================================
  Files          81       81
  Lines       11820    11790      -30
==========================================
- Hits        11812    11782      -30
  Misses          8        8
Thanks @Simone-Bordoni, looks good. The only two things that I am not a fan of are:
- The new requires_grad argument added to cast(). I would remove that and, when possible, move the functions that need it inside the backend as methods (like the _curve_fit case discussed above). For the cases where that is not possible, I would create a specific backend.cast_parameter(), roughly (see the usage sketch after this list):

# numpy
def cast_parameter(self, x, **kwargs):
    return self.cast(x, **kwargs)

# torch
def cast_parameter(self, x, **kwargs):
    return self.cast(x, **kwargs).requires_grad_()

- The fact that the decomposition needs the backend now. Ideally, I don't think this should be the case, but I don't know in practice whether it's possible to avoid.
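To make the suggestion concrete, here is a self-contained sketch of how such methods could look and be used at a call site; the backend class names and the cast signatures below are assumptions for illustration, not qibo's actual code:

import numpy as np
import torch

class NumpyBackend:
    def cast(self, x, dtype=np.float64, **kwargs):
        return np.asarray(x, dtype=dtype)

    def cast_parameter(self, x, **kwargs):
        # numpy carries no autograd state, so this is just the plain cast
        return self.cast(x, **kwargs)

class PyTorchBackend:
    def cast(self, x, dtype=torch.float64, **kwargs):
        return torch.as_tensor(x, dtype=dtype)

    def cast_parameter(self, x, **kwargs):
        # mark the freshly cast tensor as a differentiable leaf
        return self.cast(x, **kwargs).requires_grad_()

theta = PyTorchBackend().cast_parameter(0.3)
print(theta.requires_grad)  # True

With this split, only the torch backend knows about autograd, and call sites never need to pass a requires_grad flag to the standard cast().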
Regarding the decomposition, as it is now you need the backend.
So I agree on adding a cast_parameter function to all backends, but it should not be tied to the standard cast().
Actually, cast_parameter() was made to cast a single parameter for matrix_parametrized().
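As a hypothetical illustration of that usage (the function below is a sketch, not qibo's actual matrix_parametrized, and cast_parameter here is a standalone stand-in for the backend method), the single scalar parameter is cast on its own and only then promoted to complex for the matrix entries:

import torch

def cast_parameter(x):
    # hypothetical per-parameter cast: a single scalar, kept differentiable
    return torch.as_tensor(x, dtype=torch.float64).requires_grad_()

def matrix_parametrized(theta):
    # hypothetical sketch of a parametrized-matrix builder: the parameter
    # is cast individually, then promoted to complex for the gate matrix
    theta = cast_parameter(theta)
    phase = torch.exp(-0.5j * theta.to(torch.complex128))
    return torch.stack([torch.stack([phase, torch.zeros_like(phase)]),
                        torch.stack([torch.zeros_like(phase), phase.conj()])])

m = matrix_parametrized(0.3)
print(m.dtype)  # torch.complex128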
Fix #1449
Checklist: