Could you please provide the code for train and neuron selection ? #11
I think it's in this file: https://github.com/PurduePAML/TrojanNN/blob/master/code/get_neuron_idx_from_weight.py
Here's my understanding of how to fix your implementation. Hope it's correct.
Hello, I would like to ask about this. The idea is supposed to be checking the parameters of the preceding layer, but the code comment says "To select neuron in fc6 we check the weight in layer fc7". Shouldn't fc7 be the layer after fc6?
@JohnnyQAQ
Thank you very much for your answer. What confuses me is that the "Internal Neuron Selection" part of the TrojanNN paper does indeed say "the preceding layer", but when I checked the code of TrojanNN and TrojanZoo, both implementations seem to search the weight matrix W of the next layer. Could you please explain this part? Is my understanding of "the preceding layer" in the paper wrong?
@JohnnyQAQ I'm the author of TrojanZoo, and its TrojanNN implementation follows the code of this repo. I consulted the original author Yingqi Liu, and I believe it is correct to use the next layer. (Although I also doubt whether this neuron-activation story makes sense: actual runs don't show the dramatic activation gap reported in the paper.) Here is my personal explanation of neuron selection: they want to select the neurons that are well connected to the next layer, and later optimize the trigger to maximize those activations. Therefore, the code should measure the weight matrix of the next layer and use mean/sum to remove the out_features dimension, leaving a vector whose length equals the number of neurons (i.e., the output dimension of the previous layer).
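A minimal sketch of that explanation, with a toy weight matrix standing in for the next layer (the values, shapes, and top-k count are hypothetical, just to make the dimension bookkeeping concrete):

```python
import torch

# Toy "next layer" weight matrix with shape (out_features, in_features) = (2, 4);
# its in_features dimension indexes the 4 neurons of the preceding layer.
w_next = torch.tensor([[1., -2., 0., 3.],
                       [-1., 0., 4., 1.]])

# Sum |W| over the out_features dimension (dim 0), giving one score per
# preceding-layer neuron that measures its connectivity to the next layer.
connect_level = w_next.abs().sum(dim=0)  # tensor([2., 2., 4., 4.])

# Pick the top-k most strongly connected neurons as trojan targets.
top_neurons = torch.topk(connect_level, k=2).indices
```

The key point is that the reduction removes the out_features axis, so `connect_level` has length 4, matching the preceding layer's output dimension.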
@ain-soph Thank you very much for your explanation.
We are replicating your results, but the trigger-generation part always returns a high-loss trigger pattern. Could you please provide the code for training and neuron selection?
This is my implementation of the neuron-selection part in torch:

```python
layer = getattr(net, layer_name)
if isinstance(layer, torch.nn.Linear):    # weight is (n, m)
    connect_level = torch.abs(layer.weight).sum(1)  # sum each row of the matrix
elif isinstance(layer, torch.nn.Conv2d):  # weight is (c_out, c_in, h, w)
    connect_level = torch.abs(layer.weight).sum([1, 2, 3])
```
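Following ain-soph's explanation above (reduce away the *output* dimension of the next layer's weight so the result indexes the previous layer's neurons), the reduction would be over dim 0 rather than dim 1. A hedged sketch of that variant, with hypothetical layer shapes:

```python
import torch

# Hypothetical "next" layers; only their weight shapes matter here.
fc_next = torch.nn.Linear(8, 4)        # weight shape (out, in) = (4, 8)
conv_next = torch.nn.Conv2d(3, 6, 3)   # weight shape (c_out, c_in, h, w) = (6, 3, 3, 3)

# Remove the output dimension (dim 0), so each score indexes a neuron
# (or channel) of the *previous* layer.
fc_level = fc_next.weight.abs().sum(0)            # length 8 = previous layer's outputs
conv_level = conv_next.weight.abs().sum([0, 2, 3])  # length 3 = previous layer's channels
```

Whether dim 0 or dim 1 is correct is exactly the point of contention in this thread; this sketch just applies the convention described in the reply above.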