It was harder than I initially thought. It is now available on the cu_equi branch.
Training is nearly 3x faster when the number of channels is the same for every L value. If the number of channels differs per L value, as in SevenNet-0, I found it becomes slightly slower.
MD is moderately (~50%?) faster even with an inconsistent number of channels. See the sketch below for what "consistent" vs. "inconsistent" channels across L values means here.
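For context, here is a minimal sketch in e3nn's Irreps notation of the two layouts being compared. The multiplicities below are illustrative only and are not taken from the released SevenNet-0 model:

```python
from e3nn import o3

# Consistent: every L (0, 1, 2) uses the same multiplicity (64 channels).
consistent = o3.Irreps("64x0e + 64x1o + 64x2e")

# Inconsistent: multiplicity changes with L (illustrative values only,
# not the actual SevenNet-0 configuration).
inconsistent = o3.Irreps("128x0e + 64x1o + 32x2e")

print(consistent, inconsistent)
```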
LAMMPS is not supported yet because of this issue: Questions about TorchScript support NVIDIA/cuEquivariance#30
This limitation is murkier than the previous one. PyTorch itself may deprecate TorchScript, in which case we would have to re-implement the LAMMPS part once the newer replacement, torch.export, is ready. I'm not sure whether they are going to support TorchScript for cuEquivariance.
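For reference, a minimal sketch of the two serialization paths in question, using a toy module rather than an actual SevenNet model (the LAMMPS side currently loads a TorchScript-serialized model through LibTorch; torch.export is the newer path that would require reworking that interface):

```python
import torch

class ToyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0

model = ToyModel()

# Current path: TorchScript, loadable from C++ via LibTorch,
# which is how the LAMMPS pair style consumes the model today.
scripted = torch.jit.script(model)
scripted.save("toy_scripted.pt")

# Newer path: torch.export (PyTorch >= 2.1). Consuming this format
# from LAMMPS would require re-implementing that interface.
exported = torch.export.export(model, (torch.randn(4),))
torch.export.save(exported, "toy_exported.pt2")
```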
Usage:
Installation:
```bash
git clone https://github.com/MDIL-SNU/SevenNet.git
cd SevenNet
git checkout cu_equi
pip install .
pip install cuequivariance-torch
pip install cuequivariance-ops-torch-cu12  # choose cu12 or cu11 based on your CUDA version
```
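After installation, a quick sanity check along these lines can confirm the dependencies are importable (a sketch, assuming the packages expose the import names `cuequivariance_torch` and `sevenn`):

```python
import torch
import cuequivariance_torch  # noqa: F401  (cuEquivariance PyTorch bindings)
import sevenn  # noqa: F401  (SevenNet package)

# The cuEquivariance kernels require a CUDA device.
assert torch.cuda.is_available(), "CUDA device not found"
print("cuEquivariance backend dependencies are importable.")
```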
How hard would it be to add a cuEquivariance backend to SevenNet?