simplify flip_gradient #27
Comments
Thanks for pointing this out! Yes, this is much cleaner and easier to understand than the old method. I will try to find time to update and test the code with this soon.
How do I use this? It returns two outputs.
I am wondering why the second output of the `grad` function is `None` (fourth line). Can someone explain it to me?
I know this is old, but it might be helpful for newcomers as well.
According to the `tf.custom_gradient` documentation, the decorated function must return a pair `(y, grad)`: the result of the forward pass and a function that computes the custom gradient. Simply put, `y1` represents the tensor computed in the forward pass (i.e. the identity), whereas `y2` is the downstream (custom) gradient used in the backward pass.
Referring to the same doc page as above: the `grad` function must return one gradient per input of the decorated function. This means that by providing `l` as a required parameter to the `flip_grad_layer` function, TensorFlow expects a gradient with respect to `l` as well; that gradient is not intended, so it is discarded by returning `None`.
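To make the explanation above concrete, here is a minimal sketch of a gradient-reversal layer written with `tf.custom_gradient` (function and variable names are illustrative, not necessarily those in the repository). The forward pass is the identity; the backward pass flips the sign of the incoming gradient and scales it by `l`, while returning `None` as the gradient for `l` itself:

```python
import tensorflow as tf

@tf.custom_gradient
def flip_gradient(x, l=1.0):
    # Forward pass: identity.
    def grad(dy):
        # Backward pass: reverse the gradient w.r.t. x and scale by l.
        # The second return value is the gradient w.r.t. l, which is
        # not needed, hence None.
        return tf.negative(dy) * l, None
    return tf.identity(x), grad
```

Checking it with a `tf.GradientTape`: for `y = flip_gradient(x, 0.5)`, `y` equals `x` in the forward pass, while `dy/dx` comes out as `-0.5` instead of `1.0`.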
Since TensorFlow 1.7 there is a new way to redefine the gradient.