
Environment

  • This code is modified from the repository developed by Hugging Face: Transformers v2.1.1

  • Prepare the environment:

    pip install -r requirements.txt

Data

GLUE Data

  • Download the GLUE data by running this script and unpack it to the directory ${GLUE_DIR} (a download sketch follows below)

  • ${TASK_NAME} can be one of CoLA, SST-2, MRPC, STS-B, QQP, MNLI, QNLI, or RTE. Each task's data is expected under:

    ${GLUE_DIR}/${TASK_NAME}
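For concreteness, a minimal sketch of the download step, assuming the commonly used download_glue_data.py helper script (its name and flags are assumptions, not part of this repository):

    # Assumed helper script; adjust to the script actually linked above
    export GLUE_DIR=./Data_GLUE/glue_data
    python download_glue_data.py --data_dir ${GLUE_DIR} --tasks all
    # e.g., ${GLUE_DIR}/RTE should now contain train.tsv, dev.tsv, test.tsv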

Wiki Data

  • The original Wiki data used in this project can be found here (it is listed as 'Raw text from English Wikipedia for general distillation')

  • The processed Wiki data can be generated by the method from TinyBERT, using the following script from this repository:

    python pregenerate_training_data.py --train_corpus ${CORPUS_RAW} \
        --bert_model ${BERT_BASE_DIR} \
        --reduce_memory --do_lower_case \
        --epochs_to_generate 3 \
        --output_dir ${CORPUS_JSON_DIR}

    ${BERT_BASE_DIR} contains the BERT-base teacher model, e.g., BERT-base-uncased
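The command above expects a few paths to be set; a minimal sketch with hypothetical placeholders:

    # Hypothetical paths -- adjust to your own layout
    export CORPUS_RAW=./English_Wiki/wiki_raw.txt                            # raw Wikipedia text
    export BERT_BASE_DIR=./Local_models/pretrained_BERTs/BERT_base_uncased   # BERT-base teacher
    export CORPUS_JSON_DIR=./English_Wiki/corpus_jsonfile_for_general_KD     # processed output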

(i) Example on MNLI data (task-agnostic)

First train the model on MNLI, then finetune it on RTE, and finally evaluate it on RTE.

Step 0. Go to the configuration folder

cd Code/run_yaml/

Step 1. Training on MNLI (task-agnostic)

  • $$AMLT_DATA_DIR/Data_GLUE/glue_data/{TASK_NAME}/ is the data folder.
  • $$AMLT_DATA_DIR/Local_models/pretrained_BERTs/BERT_base_uncased/ contains the teacher and the student initialization.
  • Please create model_dir, download the pretrained BERT_base_uncased model, and put it there (a setup sketch follows below).

Train_NoAssist_SeriesEpochs_NoHard_PreModel_RndSampl.yaml
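A hedged setup sketch, assuming the YAML configs are submitted with the Microsoft amlt CLI and that ${AMLT_DATA_DIR} points at your data root (the $$AMLT_DATA_DIR of the configs); the experiment name is made up:

    # Create model_dir and fetch the pretrained BERT_base_uncased (weights require git-lfs)
    mkdir -p ${AMLT_DATA_DIR}/Local_models/pretrained_BERTs/
    git clone https://huggingface.co/bert-base-uncased \
        ${AMLT_DATA_DIR}/Local_models/pretrained_BERTs/BERT_base_uncased
    # Assumption: configs are launched via amlt; "mnli_train" is a hypothetical name
    amlt run Train_NoAssist_SeriesEpochs_NoHard_PreModel_RndSampl.yaml mnli_train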

Step 2. Finetuning on RTE

  • $$AMLT_DATA_DIR/Outputs/glue/MNLI/NoAssist/All_NoAug_NoHardLabel_PreModel/ contains the models trained on MNLI.
  • Epochs_{Epochs_TrainMNLI} identifies each model trained on MNLI for a given number of epochs.
  • Please create the folder $$AMLT_DATA_DIR/Outputs/glue/MNLI/NoAssist/All_NoAug_NoHardLabel_PreModel/ and put the output models of Step 1 there (a staging sketch follows below).

Train_finalfinetuning_SpecificSubs_SeriesEpochs_NoAssist_NoHardLabel_PretrainedModel.yaml
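A minimal staging sketch; ./outputs_step1 is a hypothetical location of the Step 1 outputs:

    # Stage the MNLI-trained models where the finetuning config expects them
    mkdir -p ${AMLT_DATA_DIR}/Outputs/glue/MNLI/NoAssist/All_NoAug_NoHardLabel_PreModel/
    cp -r ./outputs_step1/Epochs_* \
        ${AMLT_DATA_DIR}/Outputs/glue/MNLI/NoAssist/All_NoAug_NoHardLabel_PreModel/
    # Assumption: submitted via amlt, as in Step 1; "rte_finetune" is a hypothetical name
    amlt run Train_finalfinetuning_SpecificSubs_SeriesEpochs_NoAssist_NoHardLabel_PretrainedModel.yaml rte_finetune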

Step 3. Evaluation on RTE

  • $$AMLT_DATA_DIR/Outputs/glue/{TASK_NAME}/NoAssist/All_FINETUNING_NoHardLabel_PreModel/SpecificSubs/ contains the models finetuned on RTE.
  • FinetuneEpochs_{Finetune_Epochs}EpochsMNLI{Epochs_TrainMNLI}Sub{Subs} identifies each model finetuned on RTE.
  • Please create the folder $$AMLT_DATA_DIR/Outputs/glue/{TASK_NAME}/NoAssist/All_FINETUNING_NoHardLabel_PreModel/SpecificSubs/ and put the output models of Step 2 there (a staging sketch follows below).

Evaluate_SpecificSubs_NoAssist_NoHardLabel_PretrainedModel.yaml
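Likewise for evaluation; ./outputs_step2 is a hypothetical location of the Step 2 outputs (here {TASK_NAME} is RTE):

    # Stage the RTE-finetuned models where the evaluation config expects them
    DST=${AMLT_DATA_DIR}/Outputs/glue/RTE/NoAssist/All_FINETUNING_NoHardLabel_PreModel/SpecificSubs/
    mkdir -p ${DST}
    cp -r ./outputs_step2/FinetuneEpochs_* ${DST}
    # Assumption: submitted via amlt; "rte_eval" is a hypothetical name
    amlt run Evaluate_SpecificSubs_NoAssist_NoHardLabel_PretrainedModel.yaml rte_eval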

(ii) Example on Wiki data

First train the model on Wiki, then finetune it on RTE, and finally evaluate it on RTE.

Step 0. Go to the configuration folder

cd Code/run_yaml/

Step 1. Training on Wiki

  • $$AMLT_DATA_DIR/English_Wiki/corpus_jsonfile_for_general_KD/ contains the processed Wiki data (a staging sketch follows below)

Train_wiki_NoAssist_NoHard_PreModel_RndSampl.yaml
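A short staging sketch, reusing the hypothetical ${CORPUS_JSON_DIR} from the Wiki Data section above:

    # Place the processed Wiki corpus where the training config expects it
    mkdir -p ${AMLT_DATA_DIR}/English_Wiki/corpus_jsonfile_for_general_KD/
    cp -r ${CORPUS_JSON_DIR}/* ${AMLT_DATA_DIR}/English_Wiki/corpus_jsonfile_for_general_KD/
    # Assumption: submitted via amlt; "wiki_train" is a hypothetical name
    amlt run Train_wiki_NoAssist_NoHard_PreModel_RndSampl.yaml wiki_train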

Step 2. Finetuning on RTE

Same as Step 2 in example (i), using the models trained on Wiki in Step 1.

Step 3. Evaluation on RTE

Same as Step 3 in example (i).