
Could you please provide the code for train and neuron selection? #11

Open
CHR-ray opened this issue Feb 25, 2022 · 7 comments


CHR-ray commented Feb 25, 2022

We are replicating your results, but the trigger generation part always returns a high-loss trigger pattern. Could you please provide the code for training and neuron selection?

This is my implementation of the neuron-selection part in torch:

```python
layer = getattr(net, layer_name)
if isinstance(layer, torch.nn.Linear):    # weight is (n, m)
    connect_level = layer.weight.abs().sum(1)          # sum each row of the matrix
elif isinstance(layer, torch.nn.Conv2d):  # weight is (c_out, c_in, h, w)
    connect_level = layer.weight.abs().sum([1, 2, 3])
```

ain-soph commented Mar 4, 2022

Here's my understanding to fix your implementation. Hope it's correct.

  1. You should calculate the weight of the next layer, rather than the selected layer itself.
  2. You are summing over the wrong dimension. For the next layer, you should sum over out_channels/out_features, so that you get a tensor whose length equals the number of neurons in the previous layer's output.
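As a minimal torch sketch of point 2 (the layer sizes here are made up for illustration; `next_fc`/`next_conv` stand for whatever layer follows the selected one):

```python
import torch
import torch.nn as nn

# Score each neuron of the previous layer's output by summing |W| of the
# *next* layer over its output dimension.
next_fc = nn.Linear(in_features=4096, out_features=4096)  # e.g. fc7
# weight shape is (out_features, in_features); summing over dim 0
# leaves a vector of length in_features = number of fc6 neurons.
connect_level = next_fc.weight.abs().sum(dim=0)
assert connect_level.shape == (4096,)

next_conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)
# weight shape is (out_channels, in_channels, h, w); summing over
# dims (0, 2, 3) leaves one score per input channel.
connect_level_conv = next_conv.weight.abs().sum(dim=(0, 2, 3))
assert connect_level_conv.shape == (64,)
```

Note this is the opposite of summing over dim 1 (or dims [1, 2, 3]) as in the snippet above, which would instead score the *output* neurons of the layer you indexed.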

@JohnnyQAQ

> Here's my understanding to fix your implementation. Hope it's correct.
>
> 1. you shall calculate the weight of the preceding layer, rather than the selected layer.
> 2. You are summing the wrong dimension. For the preceding layer, you shall sum over out_channels/out_features, so that you get the tensor with same length as number of neurons from previous layer output.

Hello, I would like to ask: you say to check the parameters of the preceding layer, but what is written in the code is "To select neuron in fc6 we check the weight in layer fc7". Shouldn't fc7 be the layer after fc6, not the one before it?


ain-soph commented Apr 2, 2022

@JohnnyQAQ
Thanks for correcting my typo. It's not the preceding layer but the next layer. To select fc6 neurons, we need to calculate fc7 weights.

@JohnnyQAQ

> @JohnnyQAQ Thanks for correcting my typo. Not preceding layer but the next layer. To select fc6 neurons, we need to calculate fc7 weights.

Thank you very much for your answer. What confuses me is that the "Internal Neuron Selection" part of the TrojanNN paper does say "the preceding layer", yet when I checked the code of TrojanNN and TrojanZoo, both implementations seem to search the parameter W of the next layer. Is there something wrong with my understanding of "the preceding layer" in the paper? Could you please explain this part?


ain-soph commented Apr 3, 2022

@JohnnyQAQ I'm the author of TrojanZoo and its TrojanNN implementation is following the code of this repo.

I consulted the original author Yingqi Liu, and I believe it is correct to use the next layer. (Although I also doubt whether this neuron-activation story makes sense... actual runs don't show a crazy activation gap like the one in the paper.)

Here is my personal explanation of the neuron selection: they want to select the neurons that are well-connected to the next layer, and later optimize the trigger to maximize those neurons' activations. Therefore, it should measure the weight matrix of the next layer, and use mean/sum to remove the out_features dimension, so that we get a vector whose length equals the number of neurons (the output dimension of the previous layer).
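That selection step could be sketched roughly like this (a hypothetical fc7 of size 512 and k=2 are assumed purely for illustration, not taken from either repo):

```python
import torch
import torch.nn as nn

# Score each fc6 output neuron by the summed |weight| of the next layer
# (fc7), then keep the k best-connected neurons as trigger targets.
fc7 = nn.Linear(512, 512)                          # hypothetical next layer
connect_level = fc7.weight.abs().sum(dim=0)        # one score per fc6 neuron
k = 2
topk_values, topk_neurons = connect_level.topk(k)  # indices of neurons to target
# The trigger would then be optimized to maximize the activations of
# fc6's outputs at the indices in `topk_neurons`.
```

The trigger-generation loss in the paper then drives those selected activations toward large target values, which is why a well-connected neuron matters: its activation has many paths to influence the following layer.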

@JohnnyQAQ

@ain-soph Thank you very much for your explanation.
