
v0.1.0 release: Multi-Task Fine-tuning Framework for Multiple Base Models

@chencyudel chencyudel released this 27 Dec 08:20
· 78 commits to main since this release
7946e4f
  1. We released MFTCoder, which supports fine-tuning Code Llama, Llama, Llama 2, StarCoder, ChatGLM2, CodeGeeX2, Qwen, and GPT-NeoX models with LoRA/QLoRA.
  2. mft_peft_hf is built on the HuggingFace Accelerate and DeepSpeed frameworks.
    mft_atorch is built on the ATorch framework, a fast distributed training framework for LLMs.