# TorchOpt v0.5.0

## [0.5.0] - 2022-09-05

### Added
- Implement AdamW optimizer with masking by @Benjamin-eecs and @XuehaiPan in #44.
- Add half float support for accelerated OPs by @XuehaiPan in #67.
- Add MAML example with TorchRL integration by @vmoens and @Benjamin-eecs in #12.
- Add optional argument `params` to the update function in gradient transformations by @XuehaiPan in #65.
- Add `weight_decay` option to optimizers by @XuehaiPan in #65.
- Add `maximize` option to optimizers by @XuehaiPan in #64.
- Refactor tests using `pytest.mark.parametrize` and enable parallel testing by @XuehaiPan and @Benjamin-eecs in #55.
- Add maml-omniglot few-shot classification example using `functorch.vmap` by @Benjamin-eecs in #39.
- Add example of parallel training on a single GPU using `functorch.vmap` by @Benjamin-eecs in #32.
- Add question/help/support issue template by @Benjamin-eecs in #43.
### Changed
- Align argument names with PyTorch by @XuehaiPan in #65.
- Replace JAX PyTrees with OpTree by @XuehaiPan in #62.
- Update image link in README to support PyPI rendering by @Benjamin-eecs in #56.
### Fixed
- Fix RMSProp optimizer by @XuehaiPan in #55.
- Fix momentum tracing by @XuehaiPan in #58.
- Fix CUDA build for accelerated OP by @XuehaiPan in #53.
- Fix gamma error in MAML-RL implementation by @Benjamin-eecs in #47.