All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Enable CI workflow to build CXX/CUDA extension for Python 3.12 by @XuehaiPan in #216.
- Refactor the raw import statement in `setup.py` with `importlib` utilities by @XuehaiPan in #214 (see the sketch below).
- Drop PyTorch 1.x support by @XuehaiPan in #215.
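A minimal illustration of the `importlib`-based pattern referenced in the `setup.py` entry above; the file path and `__version__` attribute are assumptions for the sketch, not necessarily the project's actual layout:

```python
# Illustrative sketch only: load version metadata in setup.py via importlib
# instead of a raw `import`, so the build does not import the package itself.
# The 'torchopt/version.py' path and `__version__` attribute are assumptions.
import importlib.util
import pathlib

HERE = pathlib.Path(__file__).absolute().parent

spec = importlib.util.spec_from_file_location('version', HERE / 'torchopt' / 'version.py')
version = importlib.util.module_from_spec(spec)
spec.loader.exec_module(version)

print(version.__version__)  # e.g. pass as `version=version.__version__` to setup()
```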
## 0.7.3 - 2023-11-10
- Set minimal C++ standard to C++17 by @XuehaiPan in #195.
- Fix `optree` compatibility for multi-tree-map with `None` values by @XuehaiPan in #195.
## 0.7.2 - 2023-08-18
- Implement `Adadelta`, `RAdam`, and `Adamax` optimizers by @JieRen98 and @Benjamin-eecs in #171 (usage sketched below).
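A rough usage sketch for one of the new optimizers, assuming the functional `torchopt.adamax` constructor follows the same `init`/`update` interface as `torchopt.adam`; the toy parameter and loss are placeholders:

```python
import torch
import torchopt

# Sketch: functional Adamax following the standard GradientTransformation
# interface (init/update); toy parameter and loss for illustration only.
params = (torch.randn(3, requires_grad=True),)
optimizer = torchopt.adamax(lr=1e-3)
opt_state = optimizer.init(params)

loss = (params[0] ** 2).sum()
grads = torch.autograd.grad(loss, params)
updates, opt_state = optimizer.update(grads, opt_state)
params = torchopt.apply_updates(params, updates, inplace=False)
```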
## 0.7.1 - 2023-05-12
- Enable CI workflow to build CXX/CUDA extension for Python 3.11 by @XuehaiPan in #152.
- Implement AdaGrad optimizer and exponential learning rate decay schedule by @Benjamin-eecs and @XuehaiPan in #80.
- Enable tests on Windows by @XuehaiPan in #140.
- Add `ruff` and `flake8` plugins integration by @XuehaiPan in #138 and #139.
- Add more documentation on implicit differentiation by @Benjamin-eecs and @XuehaiPan in #143.
- Fix overloaded annotations of `extract_state_dict` by @StefanoWoerner in #162 (see the sketch after this list).
- Fix transposing empty iterables with `zip(*nested)` in transformations by @XuehaiPan in #145.
- Drop Python 3.7 support by @XuehaiPan in #136.
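A minimal sketch of the state-dict helpers touched by the `extract_state_dict` fix above, used with the object-oriented `MetaAdam` API; the toy network and loss are placeholders:

```python
import torch
import torchopt

# Sketch: snapshot and restore differentiable states around an inner-loop step.
net = torch.nn.Linear(4, 1)
optim = torchopt.MetaAdam(net, lr=1e-1)

net_state = torchopt.extract_state_dict(net)      # snapshot module parameters
optim_state = torchopt.extract_state_dict(optim)  # snapshot optimizer state

loss = net(torch.randn(8, 4)).mean()
optim.step(loss)  # differentiable in-place update

torchopt.recover_state_dict(net, net_state)       # roll back after the outer step
torchopt.recover_state_dict(optim, optim_state)
```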
## 0.7.0 - 2023-02-16
- Update Sphinx documentation by @XuehaiPan and @Benjamin-eecs and @waterhorse1 and @JieRen98 in #127.
- Add object-oriented modules support for zero-order differentiation by @XuehaiPan in #125.
- Use postponed evaluation of annotations and update docstring style by @XuehaiPan in #135.
- Rewrite the CUDA Toolkit setup logic by @XuehaiPan in #133.
- Update tests and fix corresponding bugs by @XuehaiPan and @Benjamin-eecs and @JieRen98 in #78.
- Fix memory leak in implicit MAML omniglot few-shot classification example with OOP APIs by @XuehaiPan in #113.
## 0.6.0 - 2022-12-07
- Add unroll pragma for CUDA OPs by @JieRen98 and @XuehaiPan in #112.
- Add Python implementation of accelerated OP and pure-Python wheels by @XuehaiPan in #67.
- Add `nan_to_num` hook and gradient transformation by @XuehaiPan in #119 (see the sketch at the end of this list).
- Add matrix inversion linear solver with Neumann series approximation by @Benjamin-eecs and @XuehaiPan in #98.
- Add a condition on the number of threads for CPU OPs by @JieRen98 in #105.
- Add implicit MAML omniglot few-shot classification example with OOP APIs by @XuehaiPan in #107.
- Add implicit MAML omniglot few-shot classification example by @Benjamin-eecs in #48.
- Add object-oriented modules support for implicit meta-gradient by @XuehaiPan in #101.
- Bump PyTorch version to 1.13.0 by @XuehaiPan in #104.
- Add zero-order gradient estimation by @JieRen98 in #93.
- Add RPC-based distributed training support and add distributed MAML example by @XuehaiPan in #83.
- Add full type hints by @XuehaiPan in #92.
- Add API documentation and tutorial for implicit gradients by @Benjamin-eecs and @JieRen98 and @XuehaiPan in #73.
- Add wrapper class for functional optimizers and examples of `functorch` integration by @vmoens and @Benjamin-eecs and @XuehaiPan in #6.
- Implicit differentiation support by @JieRen98 and @waterhorse1 and @XuehaiPan in #41.
- Refactor code organization by @XuehaiPan in #92 and #100.
- Fix implicit MAML omniglot few-shot classification example by @XuehaiPan in #108.
- Align results of distributed examples by @XuehaiPan in #95.
- Fix `None` in module containers by @XuehaiPan.
- Fix backward errors when using inplace `sqrt_` and `add_` by @Benjamin-eecs and @JieRen98 and @XuehaiPan.
- Fix LR scheduling by @XuehaiPan in #76.
- Fix the step count tensor (`shape=(1,)`) changing the shape of the scalar updates (`shape=()`) by @XuehaiPan in #71.
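A rough sketch of chaining the `nan_to_num` gradient transformation mentioned in this release with an optimizer; the keyword arguments for `nan_to_num` are assumptions based on the entry, not a confirmed signature:

```python
import torch
import torchopt

# Sketch: sanitize non-finite gradients before the Adam update by chaining
# gradient transformations; nan_to_num's keyword arguments are assumed here.
optimizer = torchopt.chain(
    torchopt.nan_to_num(nan=0.0, posinf=1e16, neginf=-1e16),
    torchopt.adam(lr=1e-3),
)

params = (torch.randn(3, requires_grad=True),)
opt_state = optimizer.init(params)

loss = (params[0] ** 2).sum()
grads = torch.autograd.grad(loss, params)
updates, opt_state = optimizer.update(grads, opt_state)
params = torchopt.apply_updates(params, updates, inplace=False)
```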
## 0.5.0 - 2022-09-05
- Implement AdamW optimizer with masking by @Benjamin-eecs and @XuehaiPan in #44.
- Add half float support for accelerated OPs by @XuehaiPan in #67.
- Add MAML example with TorchRL integration by @vmoens and @Benjamin-eecs in #12.
- Add optional argument `params` to the `update` function in gradient transformations by @XuehaiPan in #65 (see the sketch at the end of this list).
- Add `weight_decay` option to optimizers by @XuehaiPan in #65.
- Add `maximize` option to optimizers by @XuehaiPan in #64.
- Refactor tests using `pytest.mark.parametrize` and enable parallel testing by @XuehaiPan and @Benjamin-eecs in #55.
- Add MAML omniglot few-shot classification example using `functorch.vmap` by @Benjamin-eecs in #39.
- Add example of parallel training on one GPU using `functorch.vmap` by @Benjamin-eecs in #32.
- Add question/help/support issue template by @Benjamin-eecs in #43.
- Align argument names with PyTorch by @XuehaiPan in #65.
- Replace JAX PyTrees with OpTree by @XuehaiPan in #62.
- Update image link in README to support PyPI rendering by @Benjamin-eecs in #56.
- Fix RMSProp optimizer by @XuehaiPan in #55.
- Fix momentum tracing by @XuehaiPan in #58.
- Fix CUDA build for accelerated OP by @XuehaiPan in #53.
- Fix gamma error in MAML-RL implementation by @Benjamin-eecs in #47.
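A brief sketch of the `params`, `weight_decay`, and `maximize` options introduced in this release, shown on a functional optimizer; the toy parameter and loss are placeholders:

```python
import torch
import torchopt

# Sketch: weight decay and maximize options on the functional Adam optimizer.
params = (torch.randn(3, requires_grad=True),)
optimizer = torchopt.adam(lr=1e-3, weight_decay=1e-4, maximize=False)
opt_state = optimizer.init(params)

loss = (params[0] ** 2).sum()
grads = torch.autograd.grad(loss, params)

# Passing `params` to `update` is needed when weight decay is enabled.
updates, opt_state = optimizer.update(grads, opt_state, params=params)
params = torchopt.apply_updates(params, updates, inplace=False)
```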
## 0.4.3 - 2022-08-08
- Bump PyTorch version to 1.12.1 by @XuehaiPan in #49.
- CPU-only build without `nvcc` requirement by @XuehaiPan in #51.
- Use `cibuildwheel` to build wheels by @XuehaiPan in #45.
- Use dynamic process number in CPU kernels by @JieRen98 in #42.
- Use correct Python Ctype for `pybind11` function prototype by @XuehaiPan in #52.
## 0.4.2 - 2022-07-26
- Read the Docs integration by @Benjamin-eecs and @XuehaiPan in #34.
- Update documentation and code styles by @Benjamin-eecs and @XuehaiPan in #22.
- Update tutorial notebooks by @XuehaiPan in #27.
- Bump PyTorch version to 1.12 by @XuehaiPan in #25.
- Support custom Python executable path in `CMakeLists.txt` by @XuehaiPan in #18.
- Add citation information by @waterhorse1 in #14 and @Benjamin-eecs in #15.
- Implement RMSProp optimizer by @future-xy in #8.
- Use `pyproject.toml` for packaging and update GitHub Action workflows by @XuehaiPan in #31.
- Rename the package from `TorchOpt` to `torchopt` by @XuehaiPan in #20.
- Fix errors while building from source and add `conda` environment recipe by @XuehaiPan in #24.
## 0.4.1 - 2022-04-15
- Fix the device-setting bug for multi-GPU setups.
## 0.4.0 - 2022-04-09
- The first beta release of TorchOpt.
- TorchOpt with L2R, LOLA, MAML-RL, MGRL, and few-shot examples.