I added code for loading pre-trained word embeddings to rnnsearch.py in models, but now the pre-trained vectors get reloaded after every validation run (every 5000 steps). Doesn't that mean my embeddings never get fine-tuned? Something feels off. Looking forward to your reply, thanks!
Pre-trained word embeddings are usually supplied through an initializer, which only assigns values at initialization time. When a saved checkpoint exists, the initialized parameters are overwritten by the parameters stored in the checkpoint. Validation restores the previously saved checkpoint, so the pre-trained vectors should not be reloaded at that point.
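A minimal sketch of the initializer approach described above, assuming a TF 1.x-style `tf.get_variable` setup (the file name `pretrained_embeddings.npy`, the variable name `source_embedding`, and the helper function are hypothetical, not part of the THUMT code):

```python
import numpy as np
import tensorflow as tf

# Hypothetical pre-trained matrix of shape [vocab_size, embed_dim];
# in practice this would be built from a word2vec/GloVe file aligned
# to the model's vocabulary.
pretrained = np.load("pretrained_embeddings.npy")


def get_source_embedding(vocab_size, embed_dim):
    # The initializer only supplies the starting value. The variable
    # stays trainable, so the optimizer fine-tunes the embeddings like
    # any other parameter.
    return tf.get_variable(
        "source_embedding",
        shape=[vocab_size, embed_dim],
        initializer=tf.constant_initializer(pretrained),
        trainable=True)

# When training resumes or validation runs, tf.train.Saver.restore()
# overwrites the initialized value with the checkpointed (fine-tuned)
# embeddings, so the pre-trained vectors are applied only once, at the
# very first initialization.
```

If instead the pre-trained matrix is re-assigned to the variable after every checkpoint restore (e.g. with an explicit assign op in the evaluation loop), that would indeed discard any fine-tuning, which matches the symptom described in the question.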
> Pre-trained word embeddings are usually supplied through an initializer, which only assigns values at initialization time. When a saved checkpoint exists, the initialized parameters are overwritten by the parameters stored in the checkpoint. Validation restores the previously saved checkpoint, so the pre-trained vectors should not be reloaded at that point.

Have you run any experiments with pre-trained word embeddings? After adding them, my results are actually worse than without, which is confusing.