
Questions about the inference code (need model.eval()) #8

Open
BaofengZan opened this issue May 7, 2024 · 4 comments

Comments

@BaofengZan

I used `Airline_demo.py` to test the images, and the original script gives normal results. But when I add `Premodel.eval()`, the result is 0 (all other parameters unchanged).

without model.eval():
[screenshot]

with model.eval():
[screenshot]

@Lx017
Contributor

Lx017 commented May 7, 2024

Hmmmm, I haven't tested this. It may affect some normalization layers, but I'm not sure. If it works without `.eval()`, I'd recommend just leaving it out for now.
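For context on why `.eval()` can change the output: BatchNorm normalizes with the current batch's statistics in train mode, but with the running statistics accumulated during training in eval mode. If the running statistics were never properly populated, eval-mode outputs can differ drastically. A minimal numpy sketch with hypothetical values (not this repo's actual model):

```python
import numpy as np

# Hypothetical 1-channel activations and BatchNorm state, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0])
eps = 1e-5

# Train mode: normalize with the statistics of the current batch.
train_out = (x - x.mean()) / np.sqrt(x.var() + eps)

# Eval mode: normalize with running statistics saved during training.
# If these differ from the batch statistics, the outputs differ too.
running_mean, running_var = 0.0, 1.0
eval_out = (x - running_mean) / np.sqrt(running_var + eps)

print(train_out)  # ≈ [-1.34, -0.45, 0.45, 1.34]
print(eval_out)   # ≈ [1.0, 2.0, 3.0, 4.0]
```

The two outputs only agree when the running statistics match the statistics of the inference batch, which is why a model that was only ever run in train mode can break when `.eval()` is switched on.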

@BaofengZan
Author

BaofengZan commented May 7, 2024

When I export to ONNX using `torch.onnx.export`, the BN layers are exported in eval mode by default, and then inference with ONNX Runtime gives incorrect results. But when I export the BN layers in TRAINING mode, converting to TensorRT fails with an error saying the BN layer is in TRAINING mode, which makes the conversion impossible. So I think this issue still needs to be investigated.

Error when converting to TensorRT. Official answer: NVIDIA/TensorRT#3457 (comment)
[screenshot]

@Lx017
Contributor

Lx017 commented May 7, 2024

yeah sorry, that is indeed something we missed...

@BaofengZan
Author

Thank you, I look forward to seeing this problem solved.
