
Why is nclass for imagenet set to 100? #2

Open
ydc123 opened this issue Apr 13, 2020 · 3 comments


ydc123 commented Apr 13, 2020

Hello, I'd like to ask why nclass for imagenet is set to 100 in main.py. Also, if I follow the run_imagenet.sh example and quantize only the weights of resnet18 to 2 bits, leaving activations unquantized, what final accuracy would be considered normal?

pyjhzwh (Owner) commented Apr 13, 2020

Ah, sorry, that was my mistake; nclass for imagenet should be 1000.
I haven't tried 2-bit resnet18, but for an 8-bit resnet, accuracy after LQ-net retraining should be about 0.5% below the full-precision baseline.
According to the LQ-net paper, 2-bit weights with 32-bit activations gives 68.0% accuracy, but I haven't reproduced that with my own code.

ydc123 (Author) commented Apr 14, 2020

pyjhzwh (Owner) commented Apr 14, 2020

The basis in the LQ-net paper is [v1, v2, ...] (v1 < v2 < ...), whereas the basis in my code is actually [self.v, self.v*2, self.v*2^2, ...], so the two are not quite the same; mine is less flexible.
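
A minimal sketch of that difference (the numbers and names below are illustrative, not taken from the repo):

```python
import numpy as np

# LQ-net learns each basis element independently: [v1, v2, v3] with v1 < v2 < v3.
lqnet_basis = np.array([0.1, 0.25, 0.6])

# This repo's basis has a single learned scalar v with fixed power-of-two spacing:
# [v, 2*v, 4*v, ...] -- one free parameter instead of nbit of them.
v, nbit = 0.1, 3
pow2_basis = v * 2.0 ** np.arange(nbit)   # [0.1, 0.2, 0.4]

# A quantized weight is the dot product of the basis with a binary code;
# fewer free basis parameters means fewer realizable quantization levels.
code = np.array([1, -1, 1])
print(lqnet_basis @ code)  # 0.45
print(pow2_basis @ code)   # 0.3
```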
P.S. My code uses self.Wmean[i] for the bias, assuming floating-point weights = quantized weights * scale + bias. If you don't need a bias to represent the quantized weights, just delete everything involving - self.Wmean[i].
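
A hedged sketch of that reconstruction (the helper function and tensors are hypothetical; only the weights = quantized * scale + bias relation and the role of self.Wmean[i] come from the comment above):

```python
import torch

def dequantize(w_q: torch.Tensor, scale: float, bias: float = 0.0) -> torch.Tensor:
    # floating-point weights = quantized weights * scale + bias
    return w_q * scale + bias

w_q = torch.tensor([-1.0, 1.0, 1.0, -1.0])

# With a per-layer bias (the role self.Wmean[i] plays in the code):
w_with_bias = dequantize(w_q, scale=0.05, bias=0.02)  # [-0.03, 0.07, 0.07, -0.03]

# Dropping the bias (deleting the - self.Wmean[i] parts) is just bias=0:
w_no_bias = dequantize(w_q, scale=0.05)               # [-0.05, 0.05, 0.05, -0.05]
print(w_with_bias, w_no_bias)
```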
