Selective Weight Vector for Loss Func #39
Hi, you're right that the 2nd output is missing from the overloaded @max (this output is not differentiable, so I didn't think of it). Anyway, you can work around it by creating a non-differentiable layer, based on Matlab's @max, that returns 2 outputs (numInputDer is the number of input derivatives, which is 0 for non-differentiable functions).
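A minimal sketch of such a wrapper, assuming autonn's `Layer.create` with the `numInputDer` option mentioned above (the wrapper name `max2` and the `numOutputs` option are assumptions, not taken from this thread):

```matlab
% max2.m -- wrapper that exposes both outputs of Matlab's built-in max
% (the values and the argmax indices). The function name is hypothetical.
function varargout = max2(varargin)
  [varargout{1:nargout}] = max(varargin{:}) ;
end
```

It could then be used in the network definition as `[scrMax, maxInd] = Layer.create(@max2, {scr, [], 3}, 'numInputDer', 0, 'numOutputs', 2) ;`, where `numInputDer = 0` marks the layer as non-differentiable so no derivative function is required.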
It's a very cool solution, thanks!

```matlab
function [y, dzdw] = z_maxx(fea, w, scr, lbl, varargin)
  [~, dzdy] = vl_argparsepos(struct(), varargin) ;
end
```

Then I called it by using:

Also, I want to inform you that defining derivatives like `y = {dzdf, dzdw}` can't work in eval mode. `vl_nnaxpy` is in the same format, so I couldn't compile it either. When I define the derivatives as output arguments like `[y, dzdw]`, it works. I also mentioned this problem in #31.

I have one more question. Many thanks!!
Hi @jotaf98,
How can we get the max score indices of the logits? I want to calculate a verification loss for the classified person. For example, let `scr` be the logits of the network; for a person, `fea_vect` is multiplied with the corresponding weight vector `W(maxInd, :)`:

```matlab
[~, maxInd] = max(scr, [], 3) ;
loss_2 = (tanh(W(maxInd, :) * fea_vect) - lbl).^2
```

where `W = Param('value', randn(numPers, feaVectLength))`. But I get an error using max:

```
Error using Layer/max
Too many output arguments.
```

Also, is `W` trainable over the corresponding vectors?
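For reference, the intended computation works in plain MATLAB outside of autonn; a sketch with small hypothetical dimensions (all sizes here are made up for illustration, names follow the question above):

```matlab
% Plain-MATLAB sketch of the selective verification loss, with
% hypothetical sizes; only the autonn Layer overload of max restricts
% the number of outputs.
numPers = 10 ; feaVectLength = 5 ;
W = randn(numPers, feaVectLength) ;   % one weight row per person
scr = randn(1, 1, numPers) ;          % logits along the 3rd dimension
fea_vect = randn(feaVectLength, 1) ;  % feature vector of the sample
lbl = 1 ;                             % verification target

[~, maxInd] = max(scr, [], 3) ;       % index of the top-scoring person
loss_2 = (tanh(W(maxInd, :) * fea_vect) - lbl).^2 ;
```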