Hello, I'd like to ask why nclass for ImageNet is set to 100 in main.py. Also, following the run_imagenet.sh example, if I quantize only the weights of resnet18 to 2 bits and leave the activations unquantized, what final accuracy should be considered normal?
Ah sorry, that's my mistake; the ImageNet nclass should be 1000. I haven't tried resnet18 at 2 bits, but after LQ-net retraining, an 8-bit resnet should lose about 0.5% accuracy relative to the full-precision baseline. According to the LQ-net paper, 2-bit weights with 32-bit activations reach 68.0% accuracy, though I haven't reproduced that with my own code.
Does self.v refer to the basis in the paper? That is, for k bits, should self.v hold k values? Looking at https://github.com/pyjhzwh/LQ-net-pytorch/blob/master/lqnet.py#L30, self.v for the i-th layer seems to have only one value, even though I set b to 2 (i.e. 2 bits).
The basis in the LQ-net paper is [v1, v2, ...] (v1 < v2 < ...), whereas the basis in my code is effectively [self.v, self.v*2, self.v*2^2, ...], so it's not quite the same and has less flexibility. P.S. My code uses self.Wmean[i] as a bias, assuming floating-point weights = quantized weights * scale + bias. If you don't need a bias to represent the quantized weights, you can simply delete everything related to - self.Wmean[i].
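To make the single-scalar basis concrete, here is a minimal numpy sketch of the scheme described above: a b-bit code q in {0, ..., 2^b - 1} maps to the level q*v + bias, which is equivalent to summing a subset of the basis [v, 2v, ..., 2^(b-1)*v] selected by the bits of q. The function name and signature are hypothetical, not the repo's actual API.

```python
import numpy as np

def quantize_with_scalar_basis(w, v, b, bias=0.0):
    """Sketch of quantization with basis [v, 2v, ..., 2^(b-1)*v].

    Hypothetical helper: any b-bit code q in {0, ..., 2**b - 1}
    represents the level q*v + bias, since the binary expansion of q
    picks which basis elements are summed.
    """
    # Remove the bias, express w in units of v, round to the nearest
    # integer code, and clamp into the representable code range.
    q = np.clip(np.round((w - bias) / v), 0, 2 ** b - 1)
    # Map codes back to floating-point levels.
    return q * v + bias
```

For example, with v=0.25 and b=2 the representable levels (bias 0) are {0, 0.25, 0.5, 0.75}, and inputs outside that range clamp to the nearest endpoint. The paper's general basis [v1, v2, ...] has more degrees of freedom, which is the flexibility gap mentioned above.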