Cherry-pick "Change default for always_save_context to True" (#11014) into r2.0.0 (#5806)
Triggered via pull request on October 24, 2024 at 09:53
Status: Cancelled
Total duration: 1m 20s
Artifacts: –
Workflow: cicd-main.yml, on: pull_request
Jobs:
pre-flight (0s)
ASR_dev_run-part_two_Speech_to_Text_WPE_-_Squeezeformer / main
ASR_dev_run_Speech_Pre-training_-_CitriNet / main
ASR_dev_run_Speech_To_Text_Finetuning / main
ASR_dev_run_Speech_to_Text / main
ASR_dev_run_Speech_to_Text_WPE_-_CitriNet / main
ASR_dev_run_Speech_to_Text_WPE_-_Conformer / main
L0_Unit_Tests_CPU_Hydra / main
L0_Unit_Tests_CPU_LLM / main
L0_Unit_Tests_CPU_Lightning / main
L0_Unit_Tests_CPU_Multimodal / main
L0_Unit_Tests_CPU_Others / main
L0_Unit_Tests_GPU_Common / main
L0_Unit_Tests_GPU_Hydra / main
L0_Unit_Tests_GPU_LLM / main
L0_Unit_Tests_GPU_Multimodal / main
L0_Unit_Tests_GPU_Others / main
L2_ASR_Adapters_Linear_Adapters / main
L2_ASR_Adapters_RelPos_MHA_Adapters / main
L2_ASR_Multi-dataloader_dev_run_Speech_to_Label_multi-dataloader / main
L2_ASR_Multi-dataloader_dev_run_Speech_to_Text_multi-dataloader / main
L2_BioMegatron_Bert_NER_Task / main
L2_Community_LLM_Checkpoints_tests_Bert / main
L2_Community_LLM_Checkpoints_tests_Falcon / main
L2_Community_LLM_Checkpoints_tests_Llama / main
L2_Community_LLM_Checkpoints_tests_Mamba2 / main
L2_Community_LLM_Checkpoints_tests_StarCoder / main
L2_Community_vita_Checkpoints_tests_Llama3 / main
L2_Duplex_Text_Normalization_with_Tarred_dataset / main
L2_G2P_Models_G2P_Conformer_training_evaluation_and_inference / main
L2_G2P_Models_HeteronymClassificationModel_training_evaluation_and_inference / main
L2_Intent_and_Slot_Classification_Tasks_Intent_and_Slot_Classification / main
L2_Intent_and_Slot_Classification_Tasks_Multi-Label_Intent_and_Slot_Classification / main
L2_Legacy_Megatron_RETRO_Pretraining_and_Resume_Training / main
L2_Megatron_BART_Perceiver_MIM_Training_TP2 / main
L2_Megatron_BART_Pretraining_and_Resume_Training_PP2 / main
L2_Megatron_BART_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_Bert_Pretraining_and_Resume_Training / main
L2_Megatron_Bert_Pretraining_and_Resume_Training_with_Pipeline_Parallelism / main
L2_Megatron_Change_Partitions_Increase_TP_Num_Partitions_-2_to_4-_and_PP_Num_Partitions_-1_to_2 / main
L2_Megatron_Change_Partitions_Reduce_TP_Num_Partitions_-2_to_1-_and_PP_Num_Partitions_-1_to_2 / main
L2_Megatron_Core_Bert_Pretraining_and_Resume_Training / main
L2_Megatron_Core_T5_Eval / main
L2_Megatron_Core_T5_PEFT_Lora_TP2 / main
L2_Megatron_Core_T5_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_GPT_Eval / main
L2_Megatron_GPT_Eval_PP2 / main
L2_Megatron_GPT_Finetuning_PP2 / main
L2_Megatron_GPT_Finetuning_StarCoder_PP1 / main
L2_Megatron_GPT_PEFT_Lora_PP2_O2 / main
L2_Megatron_GPT_PEFT_Lora_TP2SP1 / main
L2_Megatron_GPT_PEFT_Lora_TP2_O1 / main
L2_Megatron_GPT_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_GPT_SFT_Eval_inference_seq_len_greaterThan_training_seq_len / main
L2_Megatron_GPT_with_ALiBi_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_GPT_with_KERPLE_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_GPT_with_ResetLR_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_GPT_with_Rope_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_Mock_Data_Generation_MockGPTDataset / main
L2_Megatron_Mock_Data_Generation_MockT5Dataset / main
L2_Megatron_NMT_Training_TP2 / main
L2_Megatron_RETRO_Pretraining_and_Resume_Training / main
L2_Megatron_T5_Eval / main
L2_Megatron_T5_PEFT_Lora_TP2 / main
L2_Megatron_T5_Pretraining_and_Resume_Training_PP2 / main
L2_Megatron_T5_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_T5_w_Mixture_of_Expert_Pretraining / main
L2_Megatron_T5_with_ALiBi_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_T5_with_KERPLE_Pretraining_and_Resume_Training_TP2 / main
L2_Megatron_UL2_Pretraining_and_Resume_Training_TP2 / main
L2_NMT_Attention_is_All_You_Need_Finetuning / main
L2_NMT_Attention_is_All_You_Need_Inference / main
L2_NMT_Attention_is_All_You_Need_Training_NMT_Multi-Validation / main
L2_NMT_Attention_is_All_You_Need_Training_NMT_Training_Post-LN / main
L2_NMT_Attention_is_All_You_Need_Training_NMT_Training_Pre-LN / main
L2_NMT_Tarred_Dataset_Creation_Auto_Tarred_Dataset_Creation / main
L2_NMT_Tarred_Dataset_Creation_Script_Tarred_Dataset_Creation / main
L2_NeMo_2_GPT_LoRA_TP1PP1_MBS1 / main
L2_NeMo_2_GPT_LoRA_TP1PP1_MBS2 / main
L2_NeMo_2_GPT_LoRA_TP1PP2_MBS2 / main
L2_NeMo_2_GPT_LoRA_TP2PP1_MBS2 / main
L2_NeMo_2_GPT_Pretraining_no_transformer_engine / main
L2_NeMo_2_GPT_SFT_TP1PP1_MBS1 / main
L2_NeMo_2_GPT_SFT_TP1PP1_MBS2 / main
L2_NeMo_2_GPT_SFT_TP1PP2_MBS2 / main
L2_NeMo_2_GPT_SFT_TP2PP1_MBS2 / main
L2_Parallel_NLP_Examples2_Evaluation_script_for_Punctuation / main
L2_Parallel_NLP_Examples2_Evaluation_script_for_Token_Classification / main
L2_Parallel_NLP_Examples2_NER_finetuning_from_pretrained_Test / main
L2_Parallel_NLP_Examples2_NER_with_TurkuNLP__bert-base-finnish-cased-v1 / main
L2_Parallel_NLP_Examples2_Punctuation_and_capitalization_finetuning_from_pretrained_test / main
L2_Pretraining_BERT_from_Preprocessed / main
L2_Pretraining_BERT_pretraining_from_Text / main
L2_RAG_Pipeline_Generating / main
L2_RAG_Pipeline_Indexing / main
L2_Segmentation_Tool_Parallel_ctc_segmentation_test_L2_Eng_CitriNet_with_wav / main
L2_Segmentation_Tool_Parallel_ctc_segmentation_test_L2_Ru_QN_with_mp3 / main
L2_Speaker_dev_run_Clustering_Diarizer_Inference / main
L2_Speaker_dev_run_Multispeaker_ASR_Data_Simulation / main
L2_Speaker_dev_run_Neural_Diarizer_Inference / main
L2_Speaker_dev_run_Speaker_Diarization / main
L2_Speaker_dev_run_Speaker_Diarization_with_ASR_Inference / main
L2_Speaker_dev_run_Speaker_Recognition / main
L2_Speaker_dev_run_Speech_to_Label / main
L2_Speech_Transcription_Speech_to_Text_Transcribe / main
L2_Speech_to_Text_EMA / main
L2_TTS_Fast_dev_runs_1_FastPitch / main
L2_TTS_Fast_dev_runs_1_Hifigan / main
L2_TTS_Fast_dev_runs_1_Mixer-TTS / main
L2_TTS_Fast_dev_runs_1_Tacotron_2 / main
L2_TTS_Fast_dev_runs_1_WaveGlow / main
Speech_Checkpoints_tests / main
L0_Setup_Test_Data_And_Models / main
L2_Community_LLM_Checkpoints_tests_Llama3 / main
L2_Distill_Llama2 / main
L2_Megatron_GPT_Reranker / main
L2_PTQ_Llama2_Export_Only / main
L2_PTQ_Llama2_FP8 / main
L2_Speech_Batch_Size_OOMptimizer / main
L2_Speech_Batch_Size_OOMptimizer_Canary / main
L2_Speech_Estimate_Duration_Bins / main
L2_Speech_Transcription_Canary_Transcribe_Audio_Dir / main
L2_Speech_Transcription_Canary_Transcribe_Full_Manifest / main
L2_Speech_Transcription_Canary_Transcribe_With_Prompt / main
L2_Speech_to_Text_AED / main
OPTIONAL_ASR_dev_run_Speech_To_Text_HF_Finetuning / main
OPTIONAL_L0_Unit_Tests_CPU_ASR / main
OPTIONAL_L0_Unit_Tests_CPU_Audio / main
OPTIONAL_L0_Unit_Tests_CPU_Common / main
OPTIONAL_L0_Unit_Tests_CPU_Core / main
OPTIONAL_L0_Unit_Tests_CPU_NLP / main
OPTIONAL_L0_Unit_Tests_CPU_TTS / main
OPTIONAL_L0_Unit_Tests_GPU_ASR / main
OPTIONAL_L0_Unit_Tests_GPU_Audio / main
OPTIONAL_L0_Unit_Tests_GPU_Core / main
OPTIONAL_L0_Unit_Tests_GPU_Lightning / main
OPTIONAL_L0_Unit_Tests_GPU_NLP / main
OPTIONAL_L0_Unit_Tests_GPU_TTS / main
OPTIONAL_L2_Megatron_GPT_Embedding / main
OPTIONAL_L2_NeMo_2_GPT_DDP_Param_Parity_check / main
OPTIONAL_L2_NeMo_2_SSM_Finetuning / main
OPTIONAL_L2_NeMo_2_SSM_Pretraining / main
OPTIONAL_L2_PTQ_Llama2_INT8_SQ / main
OPTIONAL_L2_Stable_Diffusion_Training / main
OPTIONAL_L2_Transducer_alignment_Running_pytest / main
Optional_L2_Megatron_GPT_Pretraining_and_Resume_Training_PP2 / main
Nemo_CICD_Test (2s)
Annotations (2 errors):
pre-flight: Canceling since a higher priority waiting request for 'CICD NeMo-11020' exists
Nemo_CICD_Test: Process completed with exit code 1.
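The pre-flight error is GitHub Actions' standard concurrency cancellation: a newer run sharing the same concurrency group (here keyed 'CICD NeMo-11020') superseded this one, so the run was cancelled rather than failed on its own merits. A minimal sketch of the kind of workflow-level concurrency block that produces this behavior (the group expression below is illustrative, not taken from the actual cicd-main.yml):

```yaml
# Illustrative only: a workflow-level concurrency group.
# When a new run starts with the same group key, any in-progress or
# queued run in that group is cancelled with a message like
# "Canceling since a higher priority waiting request for '<group>' exists".
concurrency:
  group: CICD NeMo-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true
```

With `cancel-in-progress: true`, pushing a new commit to the same PR preempts the running pipeline, which is consistent with the 1m 20s duration and Cancelled status shown above.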