Thanks for the code!

In models.py, at line 104 and line 118, two convolution blocks appear to be missing an nn.LeakyReLU. To my knowledge, the convention is that each convolution (or linear) block should be followed by an activation function, a leaky ReLU in this case. I also did not find any note in the original paper describing this deviation from that architecture.

Point me in the right direction if I am wrong. Thanks!
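For reference, the conventional pattern described above could be sketched as follows. This is only an illustration of the convention, not the code in models.py: the channel sizes, kernel size, and 0.2 negative slope are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative convolution block following the common convention:
# convolution -> normalization -> activation (LeakyReLU here).
# All hyperparameters below are placeholders, not values from models.py.
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(0.2, inplace=True),  # the activation the issue says is missing
)

x = torch.randn(1, 3, 32, 32)
y = block(x)
print(tuple(y.shape))  # spatial size preserved by padding=1
```

Without the trailing nn.LeakyReLU, two consecutive convolutions would compose linearly (up to the normalization), which is presumably why the convention calls for an activation after every block.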