Move fused feedforward #53166
Conversation
Your PR was submitted successfully. Thank you for your contribution to this open-source project!
❌ The PR is not created using the PR template. You can refer to this Demo.
… move_fused_feedforward
@@ -30,6 +30,8 @@
 #include "paddle/phi/kernels/fusion/gpu/fused_residual_dropout_bias.h"
 #include "paddle/phi/kernels/layer_norm_kernel.h"
+
+DECLARE_bool(use_fast_math);
Why does use_fast_math need to be introduced here?
This is to keep the file consistent with paddle/fluid/operators/fused/fused_dropout_helper.h. I didn't add this line when I originally migrated the file; I added it later while tracking down a precision issue. It doesn't actually seem related to the precision issue, but since the original fluid code has it, I left it in.
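For reference, DECLARE_bool is the gflags macro that only pulls an extern declaration of an already-defined flag (here FLAGS_use_fast_math) into the current translation unit; the flag itself is defined once elsewhere. Below is a minimal single-file sketch of that pattern; the Gelu helper and the fast-math branch are illustrative assumptions, not the actual Paddle kernel code.

```cpp
// Single-file sketch of the gflags pattern behind DECLARE_bool(use_fast_math).
#include <cmath>
#include <cstdio>

#include "gflags/gflags.h"

// In a real code base the flag is DEFINE'd in exactly one .cc file; every
// other translation unit that reads it only writes
//
//   DECLARE_bool(use_fast_math);
//
// which expands to an extern declaration of FLAGS_use_fast_math.
DEFINE_bool(use_fast_math, false, "Use faster approximate math in kernels.");

// Hypothetical helper: branch on the flag to pick a cheaper GELU approximation.
static float Gelu(float x) {
  if (FLAGS_use_fast_math) {
    const float kAlpha = 0.7978845608028654f;  // sqrt(2 / pi)
    return 0.5f * x * (1.0f + std::tanh(kAlpha * (x + 0.044715f * x * x * x)));
  }
  return 0.5f * x * (1.0f + std::erf(x * 0.70710678f));  // exact erf form
}

int main(int argc, char** argv) {
  gflags::ParseCommandLineFlags(&argc, &argv, /*remove_flags=*/true);
  std::printf("gelu(1.0) = %f\n", Gelu(1.0f));
  return 0;
}
```

If the header never reads FLAGS_use_fast_math, the DECLARE_bool line has no effect, which is the point of the review question above.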
If it has no real effect, there is no need to add it here.
LGTM
PR types
Others
PR changes
Others
Description
Migrate the fused feedforward and fused feedforward grad GPU op kernels to PHI.
Due to dependencies on distributed-related header files, this operator is migrated into functional form directly under the fluid directory for now; once the distributed dependencies have been migrated, the code will be moved into the PHI directory.
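For orientation, "functional form" here refers to the PHI kernel convention: a free function templated on data type and device context instead of an OpKernel class. The sketch below is a simplified, illustrative signature, not the actual fused_feedforward kernel interface, which takes many more inputs and attributes.

```cpp
// Simplified sketch of a PHI-style functional kernel (parameter list is
// illustrative and much shorter than the real fused_feedforward interface).
#include "paddle/phi/core/dense_tensor.h"
#include "paddle/phi/core/device_context.h"

namespace phi {
namespace fusion {

template <typename T, typename Context>
void FusedFeedForwardKernel(const Context& dev_ctx,
                            const DenseTensor& x,
                            const DenseTensor& linear1_weight,
                            const DenseTensor& linear2_weight,
                            float dropout_rate,
                            DenseTensor* out) {
  // FFN pipeline: linear1 -> activation -> dropout -> linear2 ->
  // residual dropout -> layer norm, fused on the GPU.
  dev_ctx.template Alloc<T>(out);
  // ... fused GPU implementation (e.g. via fused_dropout_helper.h) ...
}

}  // namespace fusion
}  // namespace phi
```

A kernel written this way is later registered under an op name and device, which is what makes moving it between directories mostly a mechanical step once the header dependencies allow it.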
Related Issue:
[used AI Studio]