EquiformerV2 fails rotational equivariance test #823
Conversation
Hi @curtischong, thanks for flagging this and adding a test for it! Can you make sure to set the model to evaluation mode before calling forward?
Wow! I can't believe the other tests don't explicitly put the models in eval mode using `eval()`! What exactly does `eval()` do? Does it set the model to 64-bit precision (sorry, I don't have time to read the code right now)? I made the changes, and this was the output. The energy predictions seem good enough, but not the forces! The forces assert statement passes if I set `decimal = 2` in:
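To answer the question above in general PyTorch terms (this is a generic illustration, not code from this repo): `eval()` recursively sets `module.training = False`, which changes the behavior of layers such as `Dropout` (disabled) and `BatchNorm` (uses running statistics instead of batch statistics). It does not change numerical precision; dtype is controlled separately.

```python
import torch
import torch.nn as nn

# A toy model with layers whose behavior depends on train/eval mode.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5), nn.BatchNorm1d(4))

model.train()
assert all(m.training for m in model.modules())

model.eval()
assert not any(m.training for m in model.modules())

# In eval mode, dropout is the identity and batchnorm uses fixed running
# stats, so repeated forward passes on the same input agree exactly.
x = torch.randn(8, 4)
with torch.no_grad():
    y1 = model(x)
    y2 = model(x)
assert torch.equal(y1, y2)

# eval() leaves dtype untouched: precision is a separate setting.
assert next(model.parameters()).dtype == torch.float32
```

Without `eval()`, the dropout layer would randomize outputs between the original and rotated forward passes, which by itself breaks an equivariance comparison.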
Note: when I changed …
Hi Curtis, if you set …
I added the flag and it does increase the precision! However, I'm still concerned, since for forces we can only get 4 decimal places of precision (setting decimal=5 fails the test). I remember attending a lecture by Albert Musaelian where he mentioned that we should always use 64 bits of precision, especially for MD (since errors compound). However, 4 decimal places isn't even 32 bits of precision! Do you know what else is causing this error?
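The error-compounding concern above can be illustrated with a small standalone sketch (hypothetical, not EquiformerV2 code): repeatedly applying a rotation matrix accumulates floating-point error, and the drift from exact orthogonality is orders of magnitude larger in float32 than in float64.

```python
import torch

def drift(dtype, n=1000):
    """Max deviation of a 1000-fold rotation product from orthogonality."""
    g = torch.Generator().manual_seed(0)
    # Build an orthogonal matrix via QR of a random Gaussian matrix.
    R, _ = torch.linalg.qr(torch.randn(3, 3, generator=g, dtype=torch.float64))
    R = R.to(dtype)
    M = torch.eye(3, dtype=dtype)
    for _ in range(n):
        M = R @ M
    # A product of rotations should stay orthogonal: M @ M.T == I.
    return (M @ M.T - torch.eye(3, dtype=dtype)).abs().max().item()

# Rounding error compounds per multiplication; float32 drifts far more.
assert drift(torch.float32) > drift(torch.float64)
```

A deep network applies many more floating-point operations per atom than this loop does, so seeing equivariance hold only to ~4 decimal places in float32 is consistent with ordinary rounding accumulation rather than a bug.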
This PR has been marked as stale because it has been open for 30 days with no activity. |
After trying to build my own equivariant NN framework, I came to realize that 4 decimal places of equivariance error is actually quite impressive, especially because with a deep model like EquiformerV2 there is so much error that can build up through the network (losing higher-order irreps and general rounding errors). So I'm closing this PR.
I did the test on test_equiformer_v2_deprecated.py rather than on test_equiformer_v2.py because it more closely followed the format I saw in gemnet. Regardless, the test should still verify whether the model is equivariant.
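The check described above can be sketched generically (the model here is a hypothetical stand-in, not EquiformerV2): rotating the input positions and then predicting forces should give the same result as predicting forces and then rotating them.

```python
import torch

def random_rotation(seed=0):
    """Draw an orthogonal 3x3 matrix via QR decomposition."""
    g = torch.Generator().manual_seed(seed)
    Q, _ = torch.linalg.qr(torch.randn(3, 3, generator=g, dtype=torch.float64))
    return Q

def toy_forces(pos):
    # Stand-in for a force model: pairwise spring forces, which are
    # exactly rotation-equivariant by construction.
    diff = pos[:, None, :] - pos[None, :, :]
    return -diff.sum(dim=1)

pos = torch.randn(5, 3, dtype=torch.float64)
R = random_rotation()

forces_then_rotate = toy_forces(pos) @ R.T
rotate_then_forces = toy_forces(pos @ R.T)

# decimal=4 in numpy's assert_almost_equal corresponds roughly to atol=1e-4;
# an exactly equivariant model in float64 passes a far tighter tolerance.
assert torch.allclose(forces_then_rotate, rotate_then_forces, atol=1e-4)
```

Swapping `toy_forces` for a real model's force head turns this into the equivariance test discussed in this thread; a float32 network will generally pass only at looser tolerances.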
I ran the equivariance tests on test_gemnet.py and test_gemnet_oc.py just to see whether the equivariance test at least passes there, and it does. Unfortunately, I couldn't get the entire test suite working on all of the tests due to invalid snapshot errors.
I think this equivariance test should explain the results of the model on these benchmarks: https://huggingface.co/spaces/atomind/mlip-arena
Here is the terminal output of `pytest test_equiformer_v2_deprecated.py`: