Releases: codefuse-ai/MFTCoder

MFTCoder v0.4.3: Bugfix

11 Jun 02:22
cc55b06

Bugfix: remove the default TensorBoard writer, which could cause permission problems.

P.S. If you hit a "permission denied" error on "/home/admin", please try the fixed release v0.4.3.
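
For context, a minimal sketch of the kind of guard that avoids this class of error: create a TensorBoard writer only when the caller supplies a writable log directory, instead of defaulting to a fixed path such as /home/admin. The helper name and behavior here are illustrative assumptions, not MFTCoder's actual code.

```python
import os

from torch.utils.tensorboard import SummaryWriter

def make_writer(log_dir=None):
    """Return a SummaryWriter only for an explicit, writable log dir (hypothetical helper)."""
    if log_dir is None:
        # no default writer, so no surprise "permission denied" on a fixed path
        return None
    os.makedirs(log_dir, exist_ok=True)
    if not os.access(log_dir, os.W_OK):
        raise PermissionError(f"TensorBoard log dir is not writable: {log_dir}")
    return SummaryWriter(log_dir=log_dir)
```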

MFTCoder v0.4.2: Support more open source models; support QLoRA + DeepSpeed ZeRO3 / FSDP

04 Jun 04:06
d0b8457

Support more open source models, such as Qwen2, Qwen2-MoE, StarCoder2, etc.
Support QLoRA + DeepSpeed ZeRO3 / FSDP, which makes fine-tuning very large models efficient; a sketch follows below.
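
A minimal sketch of combining QLoRA with DeepSpeed ZeRO3 / FSDP, assuming the transformers/peft/bitsandbytes stack; the model name and hyperparameters are illustrative, and this is not MFTCoder's exact code. The key detail is storing the 4-bit quantized weights in a regular dtype so the sharding backend can partition them.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    # storing the quantized weights in a regular dtype lets ZeRO3 / FSDP
    # shard them like any other parameter
    bnb_4bit_quant_storage=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B",  # illustrative; any supported model
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules="all-linear",
               task_type="CAUSAL_LM"),
)
# ZeRO3 (or FSDP) itself is enabled via the accelerate/deepspeed launcher
# config, not in this snippet.
```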

MFTCoder v0.3.0: Support more open source models, support Self-Paced Loss, support FSDP

19 Jan 11:16
e5243da

Updates:

  1. Mainly for MFTCoder-accelerate.
  2. It now supports more open source models, such as Mistral, Mixtral (MoE), DeepSeek-Coder, and ChatGLM3.
  3. It supports FSDP as an option.
  4. It also supports Self-Paced Loss as a solution for convergence balance in multitask fine-tuning (see the sketch after this list).
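
To illustrate the idea behind convergence balancing (a hedged sketch, not MFTCoder's exact formulation): tasks whose loss is still high, i.e. further from convergence, receive proportionally larger weights, so no single task dominates the update. The task names and the weighting rule are assumptions.

```python
import torch

def self_paced_weighted_loss(task_losses):
    """Weight each task's loss by its share of the total loss (hypothetical rule)."""
    losses = torch.stack(list(task_losses.values()))
    # detach the weights so only the losses themselves are backpropagated
    weights = (losses / losses.sum()).detach()
    return (weights * losses).sum()

# usage with per-task losses computed on one mixed multitask batch
total = self_paced_weighted_loss({
    "code_completion": torch.tensor(2.1, requires_grad=True),   # far from converged
    "code_translation": torch.tensor(0.7, requires_grad=True),  # closer to converged
})
total.backward()
```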

v0.1.0 release: Multitask Fine-tuning Framework for Multiple Base Models

27 Dec 08:20
7946e4f

  1. We released MFTCoder, which supports fine-tuning Code Llama, Llama, Llama2, StarCoder, ChatGLM2, CodeGeeX2, Qwen, and GPT-NeoX models with LoRA/QLoRA.
  2. mft_peft_hf is based on the HuggingFace Accelerate and DeepSpeed frameworks, while mft_atorch is based on the ATorch framework, a fast distributed training framework for LLMs (a rough sketch of this style of training loop follows below).
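
As a rough illustration of the style of training loop an Accelerate/DeepSpeed-based PEFT pipeline builds on: Accelerate handles device placement and the distributed backend, while PEFT injects the LoRA adapters. The model, batch, and hyperparameters below are stand-in assumptions, not mft_peft_hf's actual code.

```python
import torch
from accelerate import Accelerator
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

accelerator = Accelerator()  # device placement + DeepSpeed/DDP backends
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),  # stand-in for a code model
    LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
model, optimizer = accelerator.prepare(model, optimizer)

# one illustrative step on a dummy batch; a real run iterates a dataloader
input_ids = torch.randint(0, 100, (2, 16)).to(accelerator.device)
outputs = model(input_ids=input_ids, labels=input_ids)
accelerator.backward(outputs.loss)
optimizer.step()
optimizer.zero_grad()
```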