
v0.0.3: GemNet-dT, SpinConv, new data: MD, Rattled, per-adsorbate trajectories, etc.

@abhshkdz released this 27 Aug 19:00 (commit 65c2d62)

Breaking changes

  • Scheduler changed to step every iteration (#234). If your config specifies lr_milestones or warmup_epochs in epochs instead of steps, this will change how your scheduler behaves; convert those values to steps (see the conversion sketch after this list).
  • OCPCalculator no longer takes a Trainer class as input. Instead, a yaml config file and a checkpoint path must be provided (see the usage sketch after this list).
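
A minimal sketch of converting old epoch-based scheduler settings to step-based ones, assuming steps per epoch is dataset size divided by batch size; all numbers below are illustrative, not values from the release:

```python
# Convert epoch-based scheduler settings to step-based ones.
# `dataset_size` and `batch_size` are illustrative placeholders.
dataset_size = 460_328
batch_size = 32
steps_per_epoch = dataset_size // batch_size

lr_milestones_epochs = [15, 20]   # old config: milestones in epochs
warmup_epochs = 3                 # old config: warmup in epochs

lr_milestones_steps = [m * steps_per_epoch for m in lr_milestones_epochs]
warmup_steps = warmup_epochs * steps_per_epoch
print(lr_milestones_steps, warmup_steps)
```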
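And a usage sketch for the new OCPCalculator interface; the import path, parameter names (`config_yml`, `checkpoint`), and file paths below are assumptions based on the release notes, not confirmed details:

```python
# Sketch only: import location, parameter names, and paths are assumptions.
from ase.build import add_adsorbate, fcc100
from ocpmodels.common.relaxation.ase_utils import OCPCalculator  # assumed location

calc = OCPCalculator(
    config_yml="configs/s2ef/2M/gemnet/gemnet-dT.yml",  # hypothetical config path
    checkpoint="checkpoints/gemnet_dT.pt",              # hypothetical checkpoint path
)

slab = fcc100("Cu", size=(3, 3, 3), vacuum=8.0)
add_adsorbate(slab, "O", height=1.2, position="hollow")
slab.calc = calc
print(slab.get_potential_energy())  # energy-only predictions also supported (#243)
print(slab.get_forces())
```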

Major features

  • GemNet-dT model implementation
  • SpinConv model implementation
  • New data: MD, Rattled, and per-adsorbate trajectories

Other changes and improvements

  • Support for Python 3.8, PyTorch 1.8.1 (#247)
  • Early stopping for ML relaxations
  • Support for gradient clipping and maintaining an exponential moving average (EMA) of parameters (see the sketch after this list)
  • Preprocessing support for datasets that do not have fixed atoms specified (#189)
  • Jupyter notebooks for creating LMDBs from your own data and understanding the data preprocessing pipeline (#211) (see the LMDB sketch after this list)
  • Release cached CUDA memory after each relaxation batch (#190)
  • Security fix in loading EvalAI npz submissions (#194)
  • Dataloader bug fix for when the number of GPUs exceeds the number of LMDB files (#248)
  • Support for custom optimizers (#218)
  • Support for custom schedulers (#226)
  • New attributes (miller_index, shift, adsorption site, etc.) in data mapping (#219)
  • Deterministic unit tests (#228)
  • Bug fixes in released data for all tasks / splits (#197)
  • Improved logging: using the logging module instead of print statements, and recording Slurm settings
  • Better handling of job resumption on pre-emption, particularly relevant for Slurm-based clusters. Model weights and training state are now saved to separate checkpoints, and all restarted jobs log to the same wandb plot instead of a new plot per restart.
  • Support for energy-only predictions in OCPCalculator (#243)
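
As noted in the list above, a generic PyTorch sketch of gradient clipping combined with an exponential moving average of parameters; the config keys OCP uses to enable these features are not shown here:

```python
import torch

# Toy model and an EMA copy of it.
model = torch.nn.Linear(10, 1)
ema_model = torch.nn.Linear(10, 1)
ema_model.load_state_dict(model.state_dict())

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
ema_decay = 0.999

for _ in range(5):
    loss = model(torch.randn(8, 10)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    # Gradient clipping: cap the global gradient norm before stepping.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    optimizer.step()
    # EMA update: ema <- decay * ema + (1 - decay) * current.
    with torch.no_grad():
        for p_ema, p in zip(ema_model.parameters(), model.parameters()):
            p_ema.mul_(ema_decay).add_(p, alpha=1.0 - ema_decay)
```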
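And a minimal sketch of writing samples to an LMDB, in the spirit of the notebooks referenced above; the key scheme, map_size, and toy samples are illustrative choices, not the notebooks' exact code:

```python
import lmdb
import pickle

# Toy dicts standing in for preprocessed Data objects.
samples = [{"energy": -1.23, "natoms": 4}, {"energy": -2.34, "natoms": 6}]

db = lmdb.open("data.lmdb", map_size=2**30, subdir=False, meminit=False, map_async=True)
for i, sample in enumerate(samples):
    with db.begin(write=True) as txn:
        txn.put(f"{i}".encode("ascii"), pickle.dumps(sample, protocol=-1))
# Store the dataset length under a dedicated key.
with db.begin(write=True) as txn:
    txn.put(b"length", pickle.dumps(len(samples), protocol=-1))
db.sync()
db.close()
```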