Releases: adapter-hub/adapters
Adapters v1.0.1
This version is built for Hugging Face Transformers v4.45.x.
New
- Add example notebook for ReFT training (@julian-fong via #741)
Adapters v1.0.0
Blog post: https://adapterhub.ml/blog/2024/08/adapters-update-reft-qlora-merging-models
This version is built for Hugging Face Transformers v4.43.x.
New Adapter Methods & Model Support
- Add Representation Fine-Tuning (ReFT) implementation (LoReFT, NoReFT, DiReFT) (@calpt via #705); see the sketch after this list
- Add LoRA weight merging with Task Arithmetics (@lenglaender via #698)
- Add Whisper model support + notebook (@TimoImhof via #693; @julian-fong via #717)
- Add Mistral model support (@KorventennFR via #609)
- Add PLBart model support (@FahadEbrahim via #709)
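As a rough illustration of the new ReFT support, here is a minimal sketch of adding and training a LoReFT adapter. It assumes the LoReftConfig class exposed by Adapters v1.0.0; the base model, head, and hyperparameter values are illustrative only.

```python
# Minimal sketch: train a LoReFT adapter with the Adapters library.
# Assumes the LoReftConfig class from v1.0.0; values below are illustrative.
from adapters import AutoAdapterModel, LoReftConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

# LoReFT intervenes on hidden representations at selected prefix/suffix positions.
reft_config = LoReftConfig(r=1, prefix_positions=3, suffix_positions=0)
model.add_adapter("reft_task", config=reft_config)
model.add_classification_head("reft_task", num_labels=2)

# Freeze the base model; only the ReFT parameters remain trainable.
model.train_adapter("reft_task")
```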
Breaking Changes & Deprecations
- Remove support for loading from archived Hub repository (@calpt via #724)
- Remove deprecated add_fusion() & train_fusion() methods (@calpt via #714)
- Remove deprecated arguments in push_adapter_to_hub() method (@calpt via #724)
- Deprecate support for passing Python lists to adapter activation (@calpt via #714)
Adapters v0.2.2
Adapters v0.2.1
Adapters v0.2.0
This version is built for Hugging Face Transformers v4.39.x.
New
- Add support for QLoRA/QAdapter training via bitsandbytes (@calpt via #663): Notebook Tutorial (see also the sketch after this list)
- Add dropout to bottleneck adapters (@calpt via #667)
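A minimal sketch of the QLoRA-style setup: load a 4-bit quantized base model via bitsandbytes, then attach a trainable LoRA adapter. The model name and hyperparameters are illustrative; see the linked notebook tutorial for the full recipe.

```python
# Sketch: QLoRA-style adapter training on a 4-bit quantized model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import adapters
from adapters import LoRAConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative base model
    quantization_config=bnb_config,
)

adapters.init(model)  # attach adapter support to the plain HF model
model.add_adapter("qlora", config=LoRAConfig(r=8, alpha=16))
model.train_adapter("qlora")  # only the LoRA weights stay trainable
```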
Changed
- Upgrade supported Transformers version (@lenglaender via #654; @calpt via #686)
- Deprecate Hub repo in docs (@calpt via #668)
- Switch resolving order if source not specified in load_adapter() (@calpt via #681)
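For reference, the source argument of load_adapter() can still be set explicitly to pin where an adapter is resolved from; omitting it uses the new default order. A minimal sketch with an illustrative adapter ID:

```python
# Sketch: load an adapter and pin the resolving source explicitly.
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")

# source="hf" resolves from the Hugging Face Model Hub; leave it out to use
# the default resolving order. The adapter ID is illustrative.
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-sst2", source="hf")
model.set_active_adapters(adapter_name)
```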
Fixed
- Fix DataParallel training with adapters (@calpt via #658)
- Fix embedding training bug (@hSterz via #655)
- Fix fp16/bf16 for Prefix Tuning (@calpt via #659)
- Fix training error with AdapterDrop and Prefix Tuning (@TimoImhof via #673)
- Fix default cache path for adapters loaded from AH repo (@calpt via #676)
- Fix skipping composition blocks in not applicable layers (@calpt via #665)
- Fix UniPELT LoRA default config (@calpt via #682)
- Fix compatibility of adapters with HF Accelerate auto device-mapping (@calpt via #678)
- Use default head dropout prob if not provided by model (@calpt via #685)
Adapters v0.1.2
Adapters v0.1.1
This version is built for Hugging Face Transformers v4.35.x.
Fixed
- Fix error in push_adapter_to_hub() due to deprecated args (@calpt via #613)
- Fix Prefix-Tuning for T5 models where d_kv != d_model / num_heads (@calpt via #621)
- [Bart] Move CLS rep extraction from EOS tokens to head classes (@calpt via #624)
- Fix adapter activation with skip_layers / AdapterDrop training (@calpt via #634)
Adapters 0.1.0
Blog post: https://adapterhub.ml/blog/2023/11/introducing-adapters/
With the new Adapters library, we fundamentally refactored the adapter-transformers library and added support for new models and adapter methods.
This version is compatible with Hugging Face Transformers version 4.35.2.
For a guide on how to migrate from adapter-transformers to Adapters, have a look at https://docs.adapterhub.ml/transitioning.md.
Changes are given compared to the latest adapter-transformers release, v3.2.1.
New Models & Adapter Methods
- Add LLaMA model integration (@hSterz)
- Add X-MOD model integration (@calpt via #581)
- Add Electra model integration (@hSterz via #583, based on work of @amitkumarj441 and @pauli31 in #400)
- Add adapter output & parameter averaging (@calpt)
- Add Prompt Tuning (@lenglaender and @calpt via #595)
- Add Composition Support to LoRA and (IA)³ (@calpt via #598)
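To illustrate the new composition support for LoRA, a minimal sketch that runs two LoRA adapters side by side with the Parallel block (adapter names and ranks are placeholders):

```python
# Sketch: compose two LoRA adapters with the Parallel block.
from adapters import AutoAdapterModel, LoRAConfig
from adapters.composition import Parallel

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("task_a", config=LoRAConfig(r=8))
model.add_adapter("task_b", config=LoRAConfig(r=8))

# Both LoRA adapters are applied in parallel branches of a single forward pass.
model.set_active_adapters(Parallel("task_a", "task_b"))
```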
Breaking Changes
- Renamed bottleneck adapter configs and config strings; the new names can be found here: https://docs.adapterhub.ml/overview.html (@calpt). See also the sketch after this list.
- Removed the XModelWithHeads classes (@lenglaender); these classes had been deprecated since adapter-transformers version 3.0.0.
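For example, the former Pfeiffer-style bottleneck config is now addressed via the new sequential-bottleneck name. A minimal sketch, assuming the SeqBnConfig class and the seq_bn config string listed in the linked overview:

```python
# Sketch: use the renamed bottleneck config (formerly "pfeiffer").
from adapters import AutoAdapterModel, SeqBnConfig

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("bn_adapter", config=SeqBnConfig(reduction_factor=16))

# Equivalent config-string form:
# model.add_adapter("bn_adapter", config="seq_bn[reduction_factor=16]")
```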
Changes Due to the Refactoring
- Refactored the implementation of all already supported models (@calpt, @lenglaender, @hSterz, @TimoImhof)
- Separated the model config (PretrainedConfig) from the adapters config (ModelAdaptersConfig) (@calpt); see the sketch after this list
- Updated the whole documentation, Jupyter notebooks and example scripts (@hSterz, @lenglaender, @TimoImhof, @calpt)
- Introduced the load_model function to load models containing adapters. This replaces the Hugging Face from_pretrained function used in the adapter-transformers library (@lenglaender)
- Shared more logic for adapter composition between different composition blocks (@calpt via #591)
- Added backwards compatibility tests which check whether changes to the codebase, such as refactoring, impair the functionality of the library (@TimoImhof via #596)
- Refactored the EncoderDecoderModel by introducing a new mixin (ModelUsingSubmodelsAdaptersMixin) for models that contain other models (@lenglaender)
- Renamed the class AdapterConfigBase to AdapterConfig (@hSterz via #603)
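As one illustration of the refactored design, adapter support is now attached to a plain Hugging Face model explicitly instead of being baked into the Transformers classes. A minimal sketch; the model name and adapter config are illustrative:

```python
# Sketch: attach adapter support to a plain Hugging Face model.
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# After init(), the model exposes the adapter methods (add_adapter, train_adapter, ...).
adapters.init(model)
model.add_adapter("my_adapter", config="seq_bn")
model.train_adapter("my_adapter")
```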
Fixes and Minor Improvements
- Fixed EncoderDecoderModel generate function (@lenglaender)
- Fixed deletion of invertible adapters (@TimoImhof)
- Automatically convert heads when loading with XAdapterModel (@calpt via #594)
- Fix training T5 adapter models with Trainer (@calpt via #599)
- Ensure output embeddings are frozen during adapter training (@calpt via #537)
adapter-transformers v3.2.1
This is the last release of adapter-transformers. See here for the legacy codebase: https://github.com/adapter-hub/adapter-transformers-legacy.
Based on transformers v4.26.1
Fixed
- Fix compacter init weights (@hSterz via #516)
- Restore compatibility of GPT-2 weight initialization with Transformers (@calpt via #525)
- Restore Python 3.7 compatibility (@lenglaender via #510)
- Fix LoRA & (IA)³ implementation for Bart & MBart (@calpt via #518)
- Fix resume_from_checkpoint in AdapterTrainer class (@hSterz via #514)
adapter-transformers v3.2.0
Based on transformers v4.26.1
New
New model integrations
- Add BEiT integration (@jannik-brinkmann via #428, #439)
- Add GPT-J integration (@ChiragBSavani via #426)
- Add CLIP integration (@calpt via #483)
- Add ALBERT integration (@lenglaender via #488)
- Add BertGeneration (@hSterz via #480)
Misc
- Add support for adapter configuration strings (@calpt via #465, #486)
  This makes it easier to configure adapters: to create a Pfeiffer adapter with reduction factor 16, you can now use pfeiffer[reduction_factor=16]. This is especially handy for experiments with different hyperparameters or for the example scripts (see the sketch after this list).
- Add support for Stack, Parallel & BatchSplit composition to prefix tuning (@calpt via #476)
  In previous adapter-transformers versions, you could combine multiple bottleneck adapters, using them in parallel or stacking them. This is now also possible for prefix-tuning adapters: add multiple prefixes to the same model to combine the functionality of multiple adapters (Stack) or perform several tasks simultaneously (Parallel, BatchSplit).
- Enable parallel sequence generation with adapters (@calpt via #436)
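A minimal sketch of both additions in the legacy adapter-transformers library (not the newer Adapters package): a configuration string for a bottleneck adapter, and two prefix-tuning adapters combined with a composition block. Model and adapter names are illustrative.

```python
# Sketch for legacy adapter-transformers v3.2.0.
from transformers import AutoAdapterModel
from transformers.adapters.composition import Parallel

model = AutoAdapterModel.from_pretrained("roberta-base")

# Configuration string: a Pfeiffer adapter with reduction factor 16.
model.add_adapter("sst2", config="pfeiffer[reduction_factor=16]")

# Prefix tuning now supports composition blocks as well.
model.add_adapter("prefix_a", config="prefix_tuning")
model.add_adapter("prefix_b", config="prefix_tuning")
model.set_active_adapters(Parallel("prefix_a", "prefix_b"))
```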
Changed
- Removal of the MultiLingAdapterArguments class. Use the AdapterArguments class and the setup_adapter_training method instead (see the sketch below).
- Upgrade of the underlying transformers version to 4.26.1 (@calpt via #455, @hSterz via #503)
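A minimal sketch of the replacement workflow in the legacy adapter-transformers library; the import path and call signature follow the library's example scripts and should be treated as assumptions.

```python
# Sketch: replace MultiLingAdapterArguments with AdapterArguments +
# setup_adapter_training (legacy adapter-transformers; paths are assumptions).
from transformers import AutoAdapterModel, HfArgumentParser
from transformers.adapters import AdapterArguments, setup_adapter_training

parser = HfArgumentParser(AdapterArguments)
(adapter_args,) = parser.parse_args_into_dataclasses()

model = AutoAdapterModel.from_pretrained("roberta-base")

# Adds and activates an adapter according to --train_adapter / --adapter_config.
setup_adapter_training(model, adapter_args, adapter_name="sst2")
```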
Fixed
- Fixes for GLUE & dependency parsing example script (@calpt via #430, #454)
- Fix access to shared parameters of compacter (e.g. during sequence generation) (@calpt via #440)
- Fix reference to adapter configs in T5EncoderModel (@calpt via #437)
- Fix DeBERTa prefix tuning with enabled relative attention (@calpt via #451)
- Fix gating for prefix tuning layers (@calpt via #471)
- Fix input to T5 adapter layers (@calpt via #479)
- Fix AdapterTrainer hyperparameter tuning (@dtuit via #482)
- Move loading best adapter to AdapterTrainer class (@MaBeHen via #487)
- Make HuggingFace Hub Mixin work with newer utilities (@Helw150 via #473)
- Only compute fusion reg loss if fusion layer is trained (@calpt via #505)