Releases · stanfordnlp/pyreft
v0.0.8
Full Changelog: v0.0.7...v0.0.8
v0.0.7
What's Changed
- [P1] Addressing issues in DPO Training (#127) by @AmirZur in #128
- Debug subspace composition notebook implementation by @PinetreePantry in #130
- Check in negated reft notebook by @PinetreePantry in #131
- [Minor] Update notebook with newer names (#132) by @frankaging in #133
- [Minor] fix undefined var by @frankaging in #134
Full Changelog: v0.0.6...v0.0.7
v0.0.6: Minor updates and bug fixes
What's Changed
- [Minor] Update README with an example. by @frankaging in #55
- [Minor] Update README with Colab links. by @frankaging in #56
- [Minor] Update README by @frankaging in #57
- [Major] Support Llama3 models by @frankaging in #64 (see the sketch after this list)
- [Minor] More refactoring to support Llama3 experiments by @frankaging in #74
- [Minor] fix subspace (#72) by @frankaging in #75
- ReFT + DPO Tutorial by @AmirZur in #76
- Fix: Shape mismatch during left padding adjustment in compute_metrics (generated by Ana - AI SDE) by @ana-ai-sde in #89
- [P1] support ReFT+PEFT by using ReftModel to wrap PeftModel (#46) by @frankaging in #93
- [Minor] Enable lora with loreft training by @frankaging in #94
- [Minor] Basic support of quantization by @frankaging in #100
- [P0] Revert to ortho init due to unstable training by @frankaging in #103
- Fix: datasets.exceptions.DatasetNotFoundError when training with alpaca_data_cleaned by @savadikarc in #108
- [P0] Fixing LoReFT rotation layer hot loading problem (#114) by @frankaging in #123
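
Several items above touch the basic model-wrapping flow (Llama3 support in #64, LoReFT rotation-layer loading in #123). Below is a minimal sketch of that flow, assuming the README-style API (`pyreft.ReftConfig`, `pyreft.LoreftIntervention`, `pyreft.get_reft_model`) and a Llama-3 checkpoint you have access to; exact argument names may differ between releases.

```python
import torch
import transformers
import pyreft

# Load a Llama-3 base model; any supported causal LM follows the same pattern.
model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach one rank-4 LoReFT intervention to the residual-stream output of layer 8.
reft_config = pyreft.ReftConfig(representations={
    "layer": 8,
    "component": "block_output",
    "low_rank_dimension": 4,
    "intervention": pyreft.LoreftIntervention(
        embed_dim=model.config.hidden_size, low_rank_dimension=4
    ),
})
reft_model = pyreft.get_reft_model(model, reft_config)
reft_model.print_trainable_parameters()  # only the intervention parameters are trainable
```

In the repo's examples, `representations` can also be given as a list of such dicts to attach interventions at several layers or positions.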
New Contributors
- @AmirZur made their first contribution in #76
- @ana-ai-sde made their first contribution in #89
- @savadikarc made their first contribution in #108
Full Changelog: v0.0.5...v0.0.6
v0.0.5: Zeta version release
What's Changed
- Fix GitHub links to standfordnlp in the README files by @bbrowning in #39
- Update README.md by @Vikrant-Khedkar in #47
- Update chat_model.ipynb by @Vikrant-Khedkar in #48
- Fix loading IntervenableModel for its subclasses by @PinetreePantry in #49
- [Major] Zeta version by @aryamanarora in #44
New Contributors
- @bbrowning made their first contribution in #39
- @Vikrant-Khedkar made their first contribution in #47
Full Changelog: v0.0.4...v0.0.5
v0.0.4
What's Changed
- [P0] Fixing the requirements for Kaggle and Google notebooks env by @frankaging in #38
Full Changelog: v0.0.3...v0.0.4
v0.0.3
Fix requirements.
Full Changelog: v0.0.2...v0.0.3
v0.0.2: Cleanup
Full Changelog: v0.0.1...v0.0.2
v0.0.1: Initial release
What's Changed
- separate data processing into data.py by @aryamanarora in #1
- Initial attempt to adapt to the HF Trainer by @aryamanarora in #2 (see the training sketch after this list)
- add weight decay param by @aryamanarora in #3
- [Bug fix] fix layer parsing step after dataset creation and others by @frankaging in #4
- separate argparse from the training function by @aryamanarora in #8
- change math and commonsense to LLM adaptor template by @frankaging in #7
- Adding in stsb support by @frankaging in #10
- add an option for normalized input; GLUE in training HF eval by @frankaging in #11
- gsm8k splits by @aryamanarora in #12
- sharing interventions across positions by @frankaging in #13
- fix padding on intervention locations by @aryamanarora in #15
- more updates on padding of intervention locations for gsm8k and others by @frankaging in #16
- add gd option by @frankaging in #17
- adjust decoding strategy by @frankaging in #18
- Zen/gsm8k by @frankaging in #19
- minor fix by @frankaging in #20
- move generation args to config file by @aryamanarora in #21
- Update README.md by @PinetreePantry in #22
- Verified README code, fix a bug preventing proper save and load by @PinetreePantry in #23
- Update README.md by @eltociear in #32
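
PRs #1 and #2 above factor data processing into data.py and route training through the Hugging Face Trainer. The sketch below shows what that interface looks like through the repo's later examples; names such as `make_last_position_supervised_data_module` and `ReftTrainerForCausalLM` follow the README and are assumptions here, and the checkpoint, prompts, and hyperparameters are placeholders.

```python
import torch
import transformers
import pyreft

# Assumed checkpoint; any causal LM the library supports works the same way.
model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # padding is needed for batched training

# Wrap the model with a single rank-4 LoReFT intervention (as in the v0.0.6 sketch).
reft_config = pyreft.ReftConfig(representations={
    "layer": 8, "component": "block_output", "low_rank_dimension": 4,
    "intervention": pyreft.LoreftIntervention(
        embed_dim=model.config.hidden_size, low_rank_dimension=4),
})
reft_model = pyreft.get_reft_model(model, reft_config)

# Toy supervised pairs; interventions are applied at the last prompt token position.
prompts = ["Who are you?", "What can you do?"]
completions = ["I am a ReFT-tuned assistant.", "I answer questions concisely."]
data_module = pyreft.make_last_position_supervised_data_module(
    tokenizer, model, prompts, completions
)

# Standard HF Trainer arguments; values here are illustrative only.
training_args = transformers.TrainingArguments(
    output_dir="./reft_out", num_train_epochs=10,
    per_device_train_batch_size=2, learning_rate=4e-3,
    logging_steps=10, report_to=[],
)
trainer = pyreft.ReftTrainerForCausalLM(
    model=reft_model, tokenizer=tokenizer, args=training_args, **data_module
)
trainer.train()
reft_model.save(save_directory="./reft_out")  # persists only the intervention weights
```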
New Contributors
- @aryamanarora made their first contribution in #1
- @frankaging made their first contribution in #4
- @PinetreePantry made their first contribution in #22
- @eltociear made their first contribution in #32
Full Changelog: https://github.com/stanfordnlp/pyreft/commits/v0.0.1