
On the infinite-logloss problem #26

Open
ttkerasoyeah opened this issue May 21, 2020 · 3 comments

Comments

@ttkerasoyeah

First of all, thank you for sharing your work. I don't fully understand the loss function you designed, and the link you cited as the inspiration, about the infinity problem under logloss, no longer opens. Could you please explain how this issue is fixed? Thanks.

@ypwhs
Owner

ypwhs commented Jun 10, 2020

If your predicted probability is 0 and the label is 1, then your loss is -log 0, which is infinite. To avoid this, Kaggle clips the predicted probability to 1e-15, so loss = -ln(1e-15) ≈ 34.5.

That loss is still very large. To keep it from blowing up, we can clip more aggressively, for example to 0.005: loss = -ln(0.005) ≈ 5.3, so the loss on a wrongly predicted sample shrinks by about 7×. Meanwhile the loss on a correctly predicted sample is -ln(1 − 0.005) ≈ 0.005 (versus -ln(1 − 1e-15) ≈ 1e-15), which is negligible. The total loss therefore drops a lot, but the accuracy does not change.
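A minimal NumPy sketch of the clipping idea described above (the function name `clipped_logloss` and the sample arrays are illustrative, not from the repo):

```python
import numpy as np

def clipped_logloss(y_true, y_pred, eps=0.005):
    """Binary log loss with predictions clipped to [eps, 1 - eps]."""
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true = np.array([1.0, 1.0, 0.0])
y_pred = np.array([0.0, 0.99, 0.01])  # first prediction is confidently wrong

# With Kaggle's clip of 1e-15, the wrong sample alone contributes -ln(1e-15) ≈ 34.5
print(clipped_logloss(y_true, y_pred, eps=1e-15))  # ≈ 11.52

# With a clip of 0.005, the wrong sample contributes only -ln(0.005) ≈ 5.3
print(clipped_logloss(y_true, y_pred, eps=0.005))  # ≈ 1.77
```

Note that clipping only changes the loss value, not which class each prediction falls on, so accuracy is unaffected.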

@ttkerasoyeah
Author

ttkerasoyeah commented Jun 10, 2020 via email

@ttkerasoyeah
Author

ttkerasoyeah commented Jun 10, 2020 via email
