NeMo Multimodal Docs and Tests Initial PR (#8028)

* [TTS] Fix FastPitch data prep tutorial (#7524)

Signed-off-by: Ryan <[email protected]>

* add Italian tokenization (#7486)

* add Italian tokenization

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add more ipa lexicon it

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix error deletion

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* add test

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: GiacomoLeoneMaria <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Replace None strategy with auto in tutorial notebooks (#7521) (#7527)

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>

* unpin setuptools (#7534) (#7535)

Signed-off-by: fayejf <[email protected]>
Co-authored-by: fayejf <[email protected]>

* remove auto generated examples (#7510)

* explicitly remove autogenerated examples for data parallel evaluation

Signed-off-by: arendu <[email protected]>

* mark autogenerated and remove it for test

Signed-off-by: arendu <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: arendu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Add the `strategy` argument to `MegatronGPTModel.generate()` (#7264)

It is passed as an explicit argument rather than through
`**strategy_args` so as to ensure someone cannot accidentally pass other
arguments that would end up being ignored.

It is a keyword-only argument to ensure that if in the future we want to
update the signature to `**strategy_args`, we can do it without breaking
code.

Signed-off-by: Olivier Delalleau <[email protected]>
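
The keyword-only design described in this commit can be sketched in isolation (a minimal illustration of the Python mechanism only; the names below are not NeMo's actual `MegatronGPTModel.generate()` signature):

```python
# Minimal sketch of a keyword-only argument: the bare `*` forces callers
# to pass `strategy` by name, so the signature could later grow into
# `**strategy_args` without breaking existing call sites.
# (Illustrative names only, not the real generate() API.)
def generate(inputs, *, strategy=None):
    return (inputs, strategy)

ok = generate("hello", strategy="greedy")  # accepted: keyword form

try:
    generate("hello", "greedy")            # rejected: positional form
    positional_allowed = True
except TypeError:
    positional_allowed = False
```

Because positional use raises `TypeError` immediately, callers cannot accidentally pass extra arguments that would be silently ignored.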

* Fix PTL2.0 related ASR bugs in r1.21.0: Val metrics logging, None dataloader issue (#7531) (#7533)

* fix none dataloader issue ptl2



* ptl2.0 logging fixes for rnnt_models



---------

Signed-off-by: KunalDhawan <[email protected]>
Co-authored-by: Kunal Dhawan <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>

* gpus -> devices (#7542) (#7545)

Signed-off-by: Nithin Rao Koluguri <nithinraok>
Co-authored-by: Nithin Rao <[email protected]>

* Update FFMPEG version to fix issue with torchaudio (#7551) (#7553)

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>

* PEFT GPT & T5 Refactor (#7308)

* initial implementation of add_adapters API

* correct type hint

* Add config in add_adapters for save and load (@author bobchen)

* Remove AdapterConfig to avoid import error

* Add AdapterConfig back and move adapter mixin to SFT model

* Add NLPSaveRestoreConnector as default in NLPModel.restore_from

* Add restore_from_nemo_with_adapter and test script

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* rename t5 file and classes to be consistent with GPT

* add t5 sft dataset

* add support for single-file format with T5SFTDataset

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Various small changes to make T5 SFT work like GPT SFT

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add adapter evaluation test script

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add MultiAdapterConfig for IA3 and fix builder issue

* Make ptuning for T5SFTModel work using mixin

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add IA3_Adapter for AdapterName

* Add adapter name for ptuning and attention adapter

* Make test script GPT/T5 agnostic

* Add layer selection feature

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Integrate adapter name and config

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update gpt peft tuning script to new API

* add t5 peft tuning script with new API

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix IA3 layer selection issue

* Override state_dict on SFT model instead of mixin

* Add load adapter by adapter config

* move peft config map away from example script

* auto get config from nemo adapter

* Move PEFTConfig to new file

* fix ckpt save/load for t5

* name change: add_adapters -> add_adapter

* variable name change

* update t5 script

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix t5 issues

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add weight tying

* update gpt tuning script

* PEFT-API proposal

* Fix according to comments

* update tuning scripts

* move merge_cfg_with to mixin class since it applies to both gpt and t5 and requires the model class for restore
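
A rough sketch of why a config-merging classmethod belongs on the shared mixin (hypothetical simplified names; the real `merge_cfg_with` operates on OmegaConf configs and drives model restoration):

```python
# Both GPT and T5 models inherit the mixin, and a classmethod receives the
# concrete model class via `cls`, which is exactly what restore logic needs.
# (Hypothetical sketch, not the actual NLPAdapterModelMixin implementation.)
class NLPAdapterModelMixinSketch:
    @classmethod
    def merge_cfg_with(cls, base_cfg, peft_cfg):
        merged = dict(base_cfg)
        merged.update(peft_cfg)            # PEFT settings override the base
        merged["target_class"] = cls.__name__
        return merged

class GPTSketch(NLPAdapterModelMixinSketch):
    pass

class T5Sketch(NLPAdapterModelMixinSketch):
    pass

gpt_cfg = GPTSketch.merge_cfg_with({"lr": 1e-4}, {"peft_scheme": "lora"})
```

Defining the method once on the mixin avoids duplicating it in both the GPT and T5 model classes.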

* Add mcore_gpt support for NLPAdapterMixin

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix typo

* variable name change to distinguish "peft" and "adapter"

* override `load_adapters` to support `add_adapter` name change

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update tuning and eval script for adapter save/load

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add Ptuning on first stage only

* add lora tutorial for review

* Fix layer selection for mcore

* add landing page

* fix resume training

Signed-off-by: jasonwan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add mcore condition in sharded_state_dict to make sft work

* Update lora_tutorial.md

First edit of this file for PEFT documentation for NeMo

Signed-off-by: hkelly33 <[email protected]>

* rename Adapter to AttentionAdapter to avoid confusion in doc

* Change load_adapters to load .nemo

* add quick start guide

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add load_adapters with .ckpt

* Remove setup_complete changes in load_adapters

* update landing page

* remove typo

* Updated quick_start.md per Chen Cui

Signed-off-by: hkelly33 <[email protected]>

* Add inference config merger and tutorial

* Add doc string for NLPAdapterModelMixin and deprecated warning on MegatronGPTPEFTModel

* add supported_methods.md and update other documentations

* Update supported_methods.md

minor updates.

Signed-off-by: Adi Renduchintala <[email protected]>

* Update landing_page.md

minor update.

Signed-off-by: Adi Renduchintala <[email protected]>

* Modify doc string for NLPAdapterModelMixin

* Add doc string add_adapters in NLPAdapterModelMixin

* rename canonical adapters

* remove mcore hard dependency

* [PATCH] move microbatch calculator to nemo from apex

* remove apex dependency in gpt and t5 sft models

* remove apex dependency in gpt model

* render doc strings

* fix

* Add missing virtual_tokens on ptuning

* fix docstrings

* update gpt-style model coverage in docs

* update docstring

* Remove pdb

* add lightning_fabric to make docstring rendering work

* Add Ptuning missing key

* try docstring rendering

* Fix ptuning issue

* update gpt t5 peft tuning and eval scripts

* typos

* update eval config

* fix bug relating to apex dependency removal

* typo

* make predict step behave the same as test step

* make lora tutorial work in notebook

* cosmetics

* update yaml scripts

* mcore_gpt attribute optional

* typo

* update eval scripts and fix T5 eval bugs

* add NLPDDPStrategyNotebook and trainer builder logic to use it

* update lora notebook to use new trainer builder

* fix microbatch calculator bug for inference after training

* Convert markdown files to RST and incorporate with doc

* typo

* revise language

* remove extra cell

* remove unnecessary inheritance

* remove old tests

* move layer selection default so logging messages make sense

* remove `save_adapters` as adapter weights are saved automatically during training

* initialize weights from a checkpoint instead of randomly

* multiple fields can form a context (#7147)

* list of context fields and flexible prompt template

Signed-off-by: arendu <[email protected]>
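
The idea of several fields jointly forming the context can be sketched with a plain format template (the field names and template below are made up for illustration; the actual dataset uses a configurable prompt template):

```python
# Several context fields feed one flexible prompt template: each {field}
# placeholder is filled from the example dict. (Illustrative only.)
def build_prompt(template, example):
    return template.format(**example)

prompt = build_prompt(
    "Context: {passage} {hint}\nQuestion: {question}\nAnswer:",
    {
        "passage": "Paris is in France.",
        "hint": "(geography)",
        "question": "Where is Paris?",
    },
)
```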

* list of fields for context

Signed-off-by: arendu <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix bug

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* Fix bug

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* Add multiple truncation fields and middle truncation

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Compatible to old ckpt

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix tokenize detokenize issue

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Remove detokenization, add truncation augmentation

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Resolve comments

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* Remove unused import

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* revert eos

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* Add tokenizer space_sensitive attribute

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix error

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* Fix error and use re

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix bug

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* Change assert logic

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Follow Adi's suggestion

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Remove merge function

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add example and comment

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* Remove context_key and add comment

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* Remove random truncation

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix bug

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix template none

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix bug

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

---------

Signed-off-by: arendu <[email protected]>
Signed-off-by: Cheng-Ping Hsieh <[email protected]>
Signed-off-by: Cheng-Ping Hsieh <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Cheng-Ping Hsieh <[email protected]>
Co-authored-by: Cheng-Ping Hsieh <[email protected]>

* revert config changes

* remove accidental breakpoint

* support TP>1 loading

* infer adapter type from checkpoint during eval

* break up add_adapter

* enable interpolation of train_ds and validation_ds

* update metric calc script to conform to single-file eval format

* remove extraneous print

* update lora notebook for updated merge_inference_cfg

* Update nlp_adapter_mixins.py

variable name change

Signed-off-by: Chen Cui <[email protected]>

* turn off grad scaler for PP to match old scripts

* remove PEFTSaveRestoreConnector since functionality all covered by the new mixin class

* remove resume_from_checkpoint check since covered in #7335

* revert changes made in eval config interpolation

* more interpolation

* typo

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* remove dup line

Signed-off-by: Chen Cui <[email protected]>

* code style warnings

Signed-off-by: Chen Cui <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix config mistake

Signed-off-by: Chen Cui <[email protected]>

* add copyright header

Signed-off-by: Chen Cui <[email protected]>

* fix code check warnings

Signed-off-by: Chen Cui <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* revert changes to remove apex dependency (mixed apex+nemo microbatch calculator broke some CI tests)

Signed-off-by: Chen Cui <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add more deprecation notices

Signed-off-by: Chen Cui <[email protected]>

* update deprecation notices

Signed-off-by: Chen Cui <[email protected]>

* update deprecation notices

Signed-off-by: Chen Cui <[email protected]>

* consolidate peft and sft scripts

Signed-off-by: Chen Cui <[email protected]>

* update CI tests

Signed-off-by: Chen Cui <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* notebook branch points to main to prepare for merge

Signed-off-by: Chen Cui <[email protected]>

* fix gpt and t5 validation with any metric other than loss

Signed-off-by: Chen Cui <[email protected]>

* support pre-extracted checkpoints

Signed-off-by: Chen Cui <[email protected]>

---------

Signed-off-by: jasonwan <[email protected]>
Signed-off-by: hkelly33 <[email protected]>
Signed-off-by: Adi Renduchintala <[email protected]>
Signed-off-by: arendu <[email protected]>
Signed-off-by: Cheng-Ping Hsieh <[email protected]>
Signed-off-by: Cheng-Ping Hsieh <[email protected]>
Signed-off-by: Chen Cui <[email protected]>
Co-authored-by: Chen Cui <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Marc Romeyn <[email protected]>
Co-authored-by: jasonwan <[email protected]>
Co-authored-by: hkelly33 <[email protected]>
Co-authored-by: Adi Renduchintala <[email protected]>
Co-authored-by: Yuanzhe Dong <[email protected]>
Co-authored-by: Cheng-Ping Hsieh <[email protected]>
Co-authored-by: Cheng-Ping Hsieh <[email protected]>

* fix a typo (#7496)

Signed-off-by: BestJuly <[email protected]>

* [TTS] remove curly braces from ${BRANCH} in Jupyter notebook cell. (#7554) (#7560)

* remove curly braces.
* remove installation of pynini.
---------

Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>

* add youtube embed url (#7570)

Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>

* Remap speakers to continuous range of speaker_id for dataset AISHELL3 (#7536)

* Remap speakers to continuous range of speaker_id for dataset AISHELL3
* Add new key/value pair to record raw speaker for AISHELL3 dataset

Signed-off-by: Robin Dong <[email protected]>

---------

Signed-off-by: Robin Dong <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
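
The remapping described above can be sketched as follows (the manifest field names are assumptions for illustration, not the actual AISHELL3 dataset schema):

```python
# Remap arbitrary speaker labels to a contiguous 0..N-1 range and record
# the original label under a new key. (Field names are illustrative.)
def remap_speakers(entries):
    mapping = {}
    for entry in entries:
        raw = entry["speaker"]
        if raw not in mapping:
            mapping[raw] = len(mapping)  # next contiguous id
        entry["raw_speaker"] = raw       # keep the raw speaker label
        entry["speaker"] = mapping[raw]
    return entries, mapping

entries, mapping = remap_speakers(
    [{"speaker": "SSB0005"}, {"speaker": "SSB0009"}, {"speaker": "SSB0005"}]
)
```

A contiguous id range lets the speaker embedding table be sized to the number of distinct speakers, while the recorded raw label preserves traceability.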

* fix validation_step_outputs initialization for multi-dataloader (#7546) (#7572)

* added correct validation_step_outputs initialization for multi-dataloader



* changed kernel for display



* Update logic for validation and test step outputs



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* revert multidataloader changes in multilang ASR notebook



---------

Signed-off-by: KunalDhawan <[email protected]>
Signed-off-by: smajumdar <[email protected]>
Co-authored-by: Kunal Dhawan <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Append output of val step to self.validation_step_outputs (#7530) (#7532)

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>
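
The PTL 2.0 pattern this commit applies can be sketched with a plain-Python stand-in (not the actual Lightning module):

```python
# In PTL 2.0 the *_epoch_end(outputs) hooks no longer receive step outputs,
# so each validation_step appends its result to an instance list that
# on_validation_epoch_end consumes and then clears.
class ModelSketch:
    def __init__(self):
        self.validation_step_outputs = []

    def validation_step(self, batch):
        loss = sum(batch) / len(batch)   # placeholder metric
        self.validation_step_outputs.append(loss)
        return loss

    def on_validation_epoch_end(self):
        outs = self.validation_step_outputs
        avg = sum(outs) / len(outs)
        outs.clear()                     # free memory for the next epoch
        return avg

model = ModelSketch()
model.validation_step([1.0, 3.0])
model.validation_step([2.0, 2.0])
avg_loss = model.on_validation_epoch_end()
```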

* [TTS] fixed trainer's accelerator and strategy. (#7569) (#7574)

Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>

* Append val/test output to instance variable in EncDecSpeakerLabelModel (#7562) (#7573)

* Append val/test output to the instance variable in EncDecSpeakerLabelModel



* Handle test case in evaluation_step



* Replace type with isinstance



---------

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>

* Fix CustomProgressBar for resume (#7427) (#7522)

* Fix CustomProgress Bar for resume and multiple epochs



* Edit num_training_batches



* Use max_steps as total for progress bar for resume



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* fix typos in nfa and speech enhancement tutorials (#7580) (#7583)

Signed-off-by: Elena Rastorgueva <[email protected]>
Co-authored-by: Elena Rastorgueva <[email protected]>

* Add strategy as ddp_find_unused_parameters_true for glue_benchmark.py (#7454) (#7461)

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>

* update strategy (#7577) (#7578)

Signed-off-by: Nithin Rao Koluguri <nithinraok>
Co-authored-by: Nithin Rao <[email protected]>

* Fix typos (#7581)

* Change hifigan finetune strategy to ddp_find_unused_parameters_true (#7579) (#7584)

* Change strategy to auto



---------

Signed-off-by: Cheng-Ping Hsieh <[email protected]>
Co-authored-by: Cheng-Ping Hsieh <[email protected]>

* [BugFix] Add missing quotes for auto strategy in tutorial notebooks (#7541) (#7548)

* Add missing quotes for auto strategy



* Revert trainer.gpus to trainer.devices in Self_Supervised_Pre_Training.ipynb



---------

Signed-off-by: Abhishree <[email protected]>
Signed-off-by: Abhishree Thittenamane <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>

* add build os key (#7596) (#7599)

* add build os key



* add tools



* update to stable version



---------

Signed-off-by: Nithin Rao Koluguri <nithinraok>
Co-authored-by: Nithin Rao <[email protected]>

* StarCoder SFT test + bump PyT NGC image to 23.09 (#7540)

* Add SFT StarCoder test

Signed-off-by: Jan Lasek <[email protected]>

* Remove _modify_config call as it is covered in load_from_nemo just below

Signed-off-by: Jan Lasek <[email protected]>

* Test with pyt:23.09 container

Signed-off-by: Jan Lasek <[email protected]>

---------

Signed-off-by: Jan Lasek <[email protected]>

* defaults changed (#7600)

* defaults changed

Signed-off-by: arendu <[email protected]>

* typo

Signed-off-by: arendu <[email protected]>

* update

Signed-off-by: arendu <[email protected]>

---------

Signed-off-by: arendu <[email protected]>

* add ItalianPhonemesTokenizer (#7587)

* add ItalianPhonemesTokenizer

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix Italian phonemes

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add test

Signed-off-by: GiacomoLeoneMaria <[email protected]>

---------

Signed-off-by: GiacomoLeoneMaria <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Xuesong Yang <[email protected]>

* best ckpt fix (#7564) (#7588)

Signed-off-by: dimapihtar <[email protected]>
Co-authored-by: Dmytro Pykhtar <[email protected]>

* Add files via upload (#7598)

specifies the branch

Signed-off-by: George <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>

* Fix validation in G2PModel and ThutmoseTaggerModel (#7597) (#7606)

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>

* Broadcast loss only when using pipeline parallelism and within the pipeline parallel domain (#7576) (#7586)

* Broadcast loss only when using pipeline parallelism and within the pipeline parallel domain
* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Sangkug Lym <[email protected]>
Co-authored-by: Sangkug Lym <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Safeguard nemo_text_processing installation on ARM (#7485)

* safeguard nemo_text_processing installation

Signed-off-by: Jason <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update check

Signed-off-by: Jason <[email protected]>

---------

Signed-off-by: Jason <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Bound transformers version in requirements (#7620)

Signed-off-by: Abhishree <[email protected]>

* fix llama2 70b lora tuning bug (#7622)

* fix llama2 70b lora tuning bug

Signed-off-by: Chen Cui <[email protected]>

* Update peft_config.py

brackets

Signed-off-by: Adi Renduchintala <[email protected]>

---------

Signed-off-by: Chen Cui <[email protected]>
Signed-off-by: Adi Renduchintala <[email protected]>
Co-authored-by: Adi Renduchintala <[email protected]>

* Fix import error no module name model_utils (#7629)

Signed-off-by: Mehadi Hasan Menon <[email protected]>

* add fc large ls models (#7641)

Signed-off-by: Nithin Rao Koluguri <nithinraok>
Co-authored-by: Nithin Rao Koluguri <nithinraok>

* bugfix: trainer.gpus, trainer.strategy, trainer.accelerator (#7621) (#7642)

* [TTS] bugfix for Tacotron2 tutorial due to PTL 2.0
* trainer.gpus -> trainer.devices
* fixed related tutorial bugs
---------
Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>

* fix ssl models ptl monitor val through logging (#7608) (#7614)

Signed-off-by: Nithin Rao Koluguri <nithinraok>
Co-authored-by: Nithin Rao <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>

* Fix metrics for SE tutorial (#7604) (#7612)

Signed-off-by: Ante Jukić <[email protected]>
Co-authored-by: anteju <[email protected]>

* Add ddp_find_unused_parameters=True and change accelerator to auto (#7623) (#7644)

* Add ddp_find_unused_parameters=True and change accelerator to auto



* Add ddp_find_unused_parameters True for normalization_as_tagging_train.py



---------

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>

* Fix py3.11 dataclasses issue  (#7616)

* Fix py3.11 dataclasses issue  (#7582)

* Update ASR configs to support Python 3.11

Signed-off-by: smajumdar <[email protected]>

* Update TTS configs to support Python 3.11

Signed-off-by: smajumdar <[email protected]>

* Guard MeCab and Ipadic

Signed-off-by: smajumdar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix remaining ASR dataclasses

Signed-off-by: smajumdar <[email protected]>

* Fix remaining ASR dataclasses

Signed-off-by: smajumdar <[email protected]>

* Fix scripts

Signed-off-by: smajumdar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Update name to ConfidenceMethodConfig

Signed-off-by: smajumdar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Broadcast loss only when using pipeline parallelism and within the pipeline parallel domain (#7576) (#7586)

* Broadcast loss only when using pipeline parallelism and within the pipeline parallel domain
* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Sangkug Lym <[email protected]>
Co-authored-by: Sangkug Lym <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Safeguard nemo_text_processing installation on ARM (#7485)

* safeguard nemo_text_processing installation

Signed-off-by: Jason <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update check

Signed-off-by: Jason <[email protected]>

---------

Signed-off-by: Jason <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Fix changes to confidence measure

Signed-off-by: smajumdar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: smajumdar <[email protected]>
Signed-off-by: Sangkug Lym <[email protected]>
Signed-off-by: Jason <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sangkug Lym <[email protected]>
Co-authored-by: Jason <[email protected]>

* [Stable Diffusion/ControlNet] Enable O2 training for SD and Fix ControlNet CI failure

* Mingyuanm/dreambooth fix

* Fix NeMo CI Infer Issue

* DreamFusion

* Move neva export changes

* Add Imagen Synthetic Dataloader

* Add VITWrapper and export stuff to wrapper

* Update neva with megatron-core support

* Fix issues with Dockerfile (#7650) (#7652)

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>

* [ASR] RNN-T greedy decoding max_frames fix for alignment and confidence (#7635)

* decoding and test fix

Signed-off-by: Aleksandr Laptev <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Aleksandr Laptev <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* [ASR] Fix type error in jasper (#7636) (#7653)

Signed-off-by: Ryan <[email protected]>
Co-authored-by: Ryan Langman <[email protected]>

* [TTS] Add STFT and SI-SDR loss to audio codec recipe (#7468)

* [TTS] Add STFT and SI-SDR loss to audio codec recipe

Signed-off-by: Ryan <[email protected]>

* [TTS] Fix STFT resolution

Signed-off-by: Ryan <[email protected]>

* [TTS] Fix training metric logging

Signed-off-by: Ryan <[email protected]>

* [TTS] Add docstring to mel and stft losses

Signed-off-by: Ryan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Ryan <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Create per.py (#7538)

* Move model precision copy (#7336)

* move cfg precision set to megatron base model

Signed-off-by: Maanu Grover <[email protected]>

* remove copy from other models

Signed-off-by: Maanu Grover <[email protected]>

* modify attribute not arg

Signed-off-by: Maanu Grover <[email protected]>

* fix gpt model test for ptl 2.0

Signed-off-by: Maanu Grover <[email protected]>

* rename function and add docstring

Signed-off-by: Maanu Grover <[email protected]>

* replace precision to dtype conditionals with func call

Signed-off-by: Maanu Grover <[email protected]>
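
Replacing scattered precision conditionals with one lookup function might look like this (a sketch that returns dtype names; the real helper presumably returns torch dtypes, and the accepted flag values here are assumptions modeled on common Lightning precision strings):

```python
# One function replaces repeated if/else precision checks spread across
# model code. (Illustrative only, not the actual NeMo helper.)
def dtype_name_from_precision(precision):
    if precision in ("bf16", "bf16-mixed"):
        return "bfloat16"
    if precision in (16, "16", "16-mixed"):
        return "float16"
    if precision in (32, "32", "32-true"):
        return "float32"
    raise ValueError(f"unknown precision: {precision!r}")
```

Centralizing the mapping means a new precision flag needs a change in exactly one place.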

* unnecessary function and cfg reset

Signed-off-by: Maanu Grover <[email protected]>

* set default value

Signed-off-by: Maanu Grover <[email protected]>

* fix precision lookup in a few more places

Signed-off-by: Maanu Grover <[email protected]>

* rename mapping function

Signed-off-by: Maanu Grover <[email protected]>

* unused import

Signed-off-by: Maanu Grover <[email protected]>

* save torch datatype to model

Signed-off-by: Maanu Grover <[email protected]>

* set weights precision wrt amp o2

Signed-off-by: Maanu Grover <[email protected]>

* Revert "set weights precision wrt amp o2"

This reverts commit 313a4bfe5eb69d771a6d2433898c0685836aef5c.

Signed-off-by: Maanu Grover <[email protected]>

* revert half precision at inference attempt

Signed-off-by: Maanu Grover <[email protected]>

* move autocast dtype to base model

Signed-off-by: Maanu Grover <[email protected]>

* move params dtype to base model, enable fp16 O2 inf

Signed-off-by: Maanu Grover <[email protected]>

* unused imports

Signed-off-by: Maanu Grover <[email protected]>

---------

Signed-off-by: Maanu Grover <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix PEFT checkpoint loading (#7388)

* Fix PEFT checkpoint loading

Signed-off-by: Jason Wang <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Jason Wang <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Use distributed optimizer support for multiple dtypes (#7359)

* Update distopt wrapper with multiple dtype support

Remove manual handling of separate FP32 optimizer.

Signed-off-by: Tim Moon <[email protected]>

* Use distopt support for contiguous buffers with multiple dtypes

Signed-off-by: Tim Moon <[email protected]>

* Fix typo

Signed-off-by: Tim Moon <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Separate distopt buckets for first GPT layer and non-overlapped params

Signed-off-by: Tim Moon <[email protected]>

* Add distopt logic for int dtypes

Signed-off-by: Tim Moon <[email protected]>

* Update Apex commit

Signed-off-by: Tim Moon <[email protected]>

* Remove unused variables

Signed-off-by: Tim Moon <[email protected]>

* Update Apex commit in README and Jenkinsfile

Signed-off-by: Tim Moon <[email protected]>

* Debug Dockerfile and Jenkinsfile

Signed-off-by: Tim Moon <[email protected]>

---------

Signed-off-by: Tim Moon <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* minor fix for llama ckpt conversion script (#7387)

* minor fix for llama ckpt conversion script

Signed-off-by: Jason Wang <[email protected]>

* Update Jenkinsfile

Signed-off-by: Jason Wang <[email protected]>

* remove fast_swiglu configuration

Signed-off-by: Jason Wang <[email protected]>

---------

Signed-off-by: Jason Wang <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix wrong calling of librosa.get_duration() in notebook (#7376)

Signed-off-by: Robin Dong <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* [PATCH] PEFT import mcore (#7393)

* [PATCH] PEFT import mcore

Signed-off-by: Jason Wang <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Jason Wang <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Create per.py

Script for calculating Punctuation Error Rate and related rates (correct rate, deletion rate, etc.)

Signed-off-by: Sasha Meister <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Sasha Meister <[email protected]>

* [TTS] Added a callback for logging initial data (#7384)

Signed-off-by: Ante Jukić <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Update Core Commit (#7402)

* Update Core Commit

Signed-off-by: Abhinav Khattar <[email protected]>

* update commit

Signed-off-by: Abhinav Khattar <[email protected]>

---------

Signed-off-by: Abhinav Khattar <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Use cfg attribute in bert (#7394)

* use cfg attribute instead of arg

Signed-off-by: Maanu Grover <[email protected]>

* use torch_dtype in place of cfg.precision

Signed-off-by: Maanu Grover <[email protected]>

* move precision copy before super constructor

Signed-off-by: Maanu Grover <[email protected]>

* use trainer arg

Signed-off-by: Maanu Grover <[email protected]>

---------

Signed-off-by: Maanu Grover <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Add support for bias conversion in Swiglu models (#7386)

* Add support for bias conversion in Swiglu models

Signed-off-by: smajumdar <[email protected]>

* Add support for auto extracting tokenizer model

Signed-off-by: smajumdar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add support for auto extracting tokenizer model

Signed-off-by: smajumdar <[email protected]>

* Fix issue with missing tokenizer

Signed-off-by: smajumdar <[email protected]>

* Refactor

Signed-off-by: smajumdar <[email protected]>

* Refactor

Signed-off-by: smajumdar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Update save_to and restore_from for dist checkpointing (#7343)

* add dist ckpt to save to, in progress

Signed-off-by: eharper <[email protected]>

* move dist ckpt

Signed-off-by: eharper <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* clean up

Signed-off-by: eharper <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update restore from, need to figure out how to initialize distributed

Signed-off-by: eharper <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* launch distrib if needed when restoring dist ckpt

Signed-off-by: eharper <[email protected]>

* when using mcore we can change tp pp on the fly

Signed-off-by: eharper <[email protected]>

* add load_from_checkpoint support for dist ckpt

Signed-off-by: eharper <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update llama convert script to save dist .nemo

Signed-off-by: eharper <[email protected]>

* fix load dist ckpt

Signed-off-by: jasonwan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* setup TE TP groups if needed

Signed-off-by: eharper <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* setup te tp groups if needed

Signed-off-by: eharper <[email protected]>

* remove import

Signed-off-by: eharper <[email protected]>

---------

Signed-off-by: eharper <[email protected]>
Signed-off-by: jasonwan <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: jasonwan <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* fix forward with mcore=false (#7403)

Signed-off-by: Jimmy Zhang <[email protected]>
Co-authored-by: Jimmy Zhang <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix logging to remove 's/it' from progress bar in Megatron models and add train_step_timing (#7374)

* Add CustomProgressBar class to exp_manager and trainer callbacks

Signed-off-by: Abhishree <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix the progress bar to reflect total microbatch cnt

Signed-off-by: Abhishree <[email protected]>

* Modify CustomProgressBar class

1) Modify CustomProgressBar class to update progress bar per global_step instead of per microbatch
2) Add the callback to other megatron training/finetuning files that are not using MegatronTrainerBuilder
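The per-global-step counting described above can be shown without Lightning — the real `CustomProgressBar` subclasses Lightning's TQDM progress bar, so this standalone class only mirrors the counting logic, not the actual callback API:

```python
class StepProgress:
    """Advance a progress counter once per global step (optimizer step)
    instead of once per microbatch, mirroring the behavior described above."""

    def __init__(self, total_steps: int, microbatches_per_step: int):
        self.total_steps = total_steps
        self.microbatches_per_step = microbatches_per_step
        self.microbatches_seen = 0
        self.global_step = 0

    def on_microbatch_end(self) -> None:
        self.microbatches_seen += 1
        # Tick only when all microbatches contributing to one global
        # step have completed, so the bar reflects true training steps.
        if self.microbatches_seen % self.microbatches_per_step == 0:
            self.global_step += 1

    def render(self) -> str:
        return f"step {self.global_step}/{self.total_steps}"
```

With 4 microbatches per step, 8 microbatches advance the bar by 2 steps rather than 8 ticks — which is what removes the misleading "s/it" churn from the Megatron progress bar.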

Signed-off-by: Abhishree <[email protected]>

* Add CustomProgressBar callback to tuning files

Signed-off-by: Abhishree <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Set Activation Checkpointing Defaults (#7404)

* Set Activation Checkpointing Defaults

Signed-off-by: Abhinav Khattar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* check for None

Signed-off-by: Abhinav Khattar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Abhinav Khattar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* make loss mask default to false (#7407)

Signed-off-by: eharper <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Add dummy userbuffer config files (#7408)

Signed-off-by: Sangkug Lym <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* add missing ubconf files (#7412)

Signed-off-by: Abhinav Khattar <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* New tutorial on Speech Data Explorer (#7405)

* Added Google Colab based tutorial on Speech Data Explorer

Signed-off-by: George Zelenfroynd <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Update ptl training ckpt conversion script to work with dist ckpt (#7416)

* update ptl convert script

Signed-off-by: eharper <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* don't break legacy

Signed-off-by: eharper <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: eharper <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Allow disabling sanity checking when num_sanity_val_steps=0 (#7413)

* Allow disabling sanity checking when num_sanity_val_steps=0

Signed-off-by: Abhishree <[email protected]>

* Update num_sanity_val_steps to be a multiple of num_microbatches

Signed-off-by: Abhishree Thittenamane <[email protected]>
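Making a step count a multiple of the number of microbatches is typically a ceil-division round-up; this helper is a hypothetical sketch of that adjustment, not the PR's exact code:

```python
def round_up_to_multiple(n: int, multiple: int) -> int:
    """Round n up to the nearest multiple of `multiple` (e.g. sanity-val
    steps up to whole microbatch groups). 0 stays 0, which preserves the
    num_sanity_val_steps=0 "sanity checking disabled" case."""
    if multiple <= 0:
        raise ValueError("multiple must be positive")
    return ((n + multiple - 1) // multiple) * multiple
```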

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Abhishree <[email protected]>
Signed-off-by: Abhishree Thittenamane <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Add comprehensive error messages (#7261)

Signed-off-by: Anton Peganov <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* check NEMO_PATH (#7418)

Signed-off-by: Nikolay Karpov <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* layer selection for ia3 (#7417)

* layer selection for ia3

Signed-off-by: arendu <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: arendu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Fix missing pip package 'einops' (#7397)

Signed-off-by: Robin Dong <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix failure of pyaudio in Google Colab (#7396)

Signed-off-by: Robin Dong <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Update README.md: output_path --> output_manifest_filepath (#7442)

Signed-off-by: Samuele Cornell <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Add rope dynamic linear scaling (#7437)

* Add dynamic linear scaling

Signed-off-by: Cheng-Ping Hsieh <[email protected]>
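Linear (and dynamic) scaling of rotary embeddings generally means dividing positions by a scale factor so sequences longer than the trained context map back into the trained range. A numpy sketch under that assumption — the scale-factor formula here is illustrative, not NeMo's exact implementation:

```python
import numpy as np

def rope_frequencies(seq_len, dim, base=10000.0, max_trained_len=2048, dynamic=True):
    """Rotary-embedding angle table with linear position scaling.  When the
    sequence exceeds the trained context, positions are compressed by
    seq_len / max_trained_len so they stay inside the trained range."""
    scale = max(seq_len / max_trained_len, 1.0) if dynamic else 1.0
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    positions = np.arange(seq_len) / scale
    return np.outer(positions, inv_freq)  # shape (seq_len, dim // 2)
```

At 4096 tokens with a 2048 trained context, positions are halved, so the largest scaled position (2047.5) still falls inside the range the model saw during training.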

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix bug

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

---------

Signed-off-by: Cheng-Ping Hsieh <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Yang Zhang <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix None dataloader issue in PTL2.0 (#7455)

* Fix None dataloader issue in PTL2.0

Signed-off-by: KunalDhawan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* updating values of self._validation_dl and self._test_dl as well

Signed-off-by: KunalDhawan <[email protected]>

* updating values of self._validation_dl and self._test_dl as well

Signed-off-by: KunalDhawan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: KunalDhawan <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* [ASR] Confidence measure -> method renames (#7434)

* measure -> method

Signed-off-by: Aleksandr Laptev <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Aleksandr Laptev <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Add steps for document of getting dataset 'SF Bilingual Speech' (#7378)

* Add steps for document of getting dataset 'SF Bilingual Speech'

Signed-off-by: Robin Dong <[email protected]>

* Update datasets.rst

added a link to a tutorial demonstrating detailed data prep steps.

Signed-off-by: Xuesong Yang <[email protected]>

---------

Signed-off-by: Robin Dong <[email protected]>
Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* RNN-T confidence and alignment bugfix (#7381)

* new frame_confidence and alignments lists are now always created after the while loop

Signed-off-by: Aleksandr Laptev <[email protected]>

* tests added

Signed-off-by: Aleksandr Laptev <[email protected]>

---------

Signed-off-by: Aleksandr Laptev <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix resume from checkpoint in exp_manager (#7424) (#7426)

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix checking of cuda/cpu device for inputs of Decoder (#7444)

* Fix checking of cuda/cpu device for inputs of Decoder

Signed-off-by: Robin Dong <[email protected]>

* Update tacotron2.py

Signed-off-by: Jason <[email protected]>

---------

Signed-off-by: Robin Dong <[email protected]>
Signed-off-by: Jason <[email protected]>
Co-authored-by: Jason <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix failure of ljspeech's get_data.py (#7430)

* Fix failure of ljspeech's get_data.py

Signed-off-by: Robin Dong <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Robin Dong <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* [TTS] Fix audio codec type checks (#7373)

* [TTS] Fix audio codec type checks

Signed-off-by: Ryan <[email protected]>

* [TTS] Fix audio codec tests

Signed-off-by: Ryan <[email protected]>

---------

Signed-off-by: Ryan <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* [TTS] Add dataset to path of logged artifacts (#7462)

* [TTS] Add dataset to path of logged artifacts

Signed-off-by: Ryan <[email protected]>

* [TTS] Revert axis name back to Audio Frames

Signed-off-by: Ryan <[email protected]>

---------

Signed-off-by: Ryan <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix sft dataset truncation (#7464)

* Add fix

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix

Signed-off-by: Cheng-Ping Hsieh <[email protected]>

---------

Signed-off-by: Cheng-Ping Hsieh <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Automatic Lip Reading Recognition (ALR) - ASR/CV (Visual ASR) (#7330)

* striding_conv1d_k5 and dw_striding_conv1d_k5 subsampling

Signed-off-by: mburchi <[email protected]>

* transpose conv1d inputs

Signed-off-by: mburchi <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: mburchi <[email protected]>

* Update subsampling.py

change striding_conv1d_k5 to striding_conv1d

Signed-off-by: Maxime Burchi <[email protected]>

* cv branch

Signed-off-by: mburchi <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* video manifest

Signed-off-by: mburchi <[email protected]>

* add collection classes

Signed-off-by: mburchi <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add test_step_outputs

Signed-off-by: mburchi <[email protected]>

* correct manifest bug when having only audio or only videos

Signed-off-by: mburchi <[email protected]>

* correct manifest bug when having only audio or only videos

Signed-off-by: mburchi <[email protected]>

* clean references

Signed-off-by: mburchi <[email protected]>

* freeze unfreeze transcribe cv models

Signed-off-by: mburchi <[email protected]>

* correct manifest get_full_path bug

Signed-off-by: mburchi <[email protected]>

* update for PR

Signed-off-by: mburchi <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* guard torchvision

Signed-off-by: mburchi <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update nemo/collections/cv/data/video_to_text_dataset.py

Co-authored-by: Igor Gitman <[email protected]>
Signed-off-by: Maxime Burchi <[email protected]>

* _video_speech_collate_fn in cv/data/video_to_text.py

Signed-off-by: mburchi <[email protected]>

* add self.out = None to asr subsampling

Signed-off-by: mburchi <[email protected]>

* Update nemo/collections/cv/data/video_to_text_dataset.py

Co-authored-by: Igor Gitman <[email protected]>
Signed-off-by: Maxime Burchi <[email protected]>

* cv -> multimodal/speech_cv branch

Signed-off-by: mburchi <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: mburchi <[email protected]>
Signed-off-by: Maxime Burchi <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Igor Gitman <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* HF StarCoder to NeMo conversion script (#7421)

* Script to convert HF StarCoder checkpoint to NeMo

Signed-off-by: Jan Lasek <[email protected]>

* StarCoder conversion test

Signed-off-by: Jan Lasek <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Jan Lasek <[email protected]>

* Fix test

Signed-off-by: Jan Lasek <[email protected]>

* Catch up with save_to changes

Signed-off-by: Jan Lasek <[email protected]>

* Don't abbreviate args for clarity

Signed-off-by: Jan Lasek <[email protected]>

* Configurable precision: BF16 vs FP32

Signed-off-by: Jan Lasek <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Jan Lasek <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* fix bug when loading dist ckpt in peft (#7452)

Signed-off-by: Hongbin Liu <[email protected]>
Co-authored-by: Hongbin Liu <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix adding positional embeddings in-place in transformer module (#7440)

Signed-off-by: Tamerlan Tabolov <[email protected]>
Co-authored-by: Cheng-Ping Hsieh <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix (#7478)

Signed-off-by: Cheng-Ping Hsieh <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* add sleep (#7498) (#7499)

* add sleep

* add sleep onto config instead

* add comment

---------

Signed-off-by: Gerald Shen <[email protected]>
Co-authored-by: Gerald Shen <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix exp manager check for sleep (#7503) (#7504)

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* bugfix: trainer.accelerator=auto from None. (#7492) (#7493)

Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* [doc] fix broken link (#7481)

Signed-off-by: Stas Bekman <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* [TTS] Read audio as int32 to avoid flac read errors (#7477)

* [TTS] Read audio as int32 to avoid flac read errors

Signed-off-by: Ryan <[email protected]>
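Reading as int32 and converting to float manually sidesteps decoder paths that fail on some flac files. The read call itself (e.g. a `dtype='int32'` argument to the audio library) is assumed; the conversion step can be sketched with numpy:

```python
import numpy as np

def int32_to_float32(samples: np.ndarray) -> np.ndarray:
    """Convert int32 PCM samples (as returned by an int32 read) to float32
    in [-1.0, 1.0], the range TTS preprocessing typically expects."""
    return (samples / np.iinfo(np.int32).max).astype(np.float32)
```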

* [TTS] Add comment about read failures

Signed-off-by: Ryan <[email protected]>

---------

Signed-off-by: Ryan <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Add dataset 'AISHELL-3' from OpenSLR for training mandarin TTS (#7409)

* Add dataset 'AISHELL-3' from OpenSLR for training mandarin TTS
* Train 'AISHELL-3' dataset with multi-speakers

Signed-off-by: Robin Dong <[email protected]>

* Update get_data.py

update copyright header

Signed-off-by: Xuesong Yang <[email protected]>

* Update get_data.py

added a disclaimer

Signed-off-by: Xuesong Yang <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add new configuration file for AISHELL3 with multispeaker of fastpitch

Signed-off-by: Robin Dong <[email protected]>

---------

Signed-off-by: Robin Dong <[email protected]>
Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Xuesong Yang <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* dllogger - log on rank 0 only (#7513)

Signed-off-by: Stas Bekman <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix TTS FastPitch tutorial (#7494) (#7516)

* Fix

---------

Signed-off-by: Cheng-Ping Hsieh <[email protected]>
Co-authored-by: Cheng-Ping Hsieh <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix get_dist() tensor dimension (#7506) (#7515)

Signed-off-by: Jocelyn Huang <[email protected]>
Co-authored-by: Jocelyn <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* bugfix: specify trainer.strategy=auto when devices=1 (#7509) (#7512)

Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* fix (#7511)

Signed-off-by: Abhinav Khattar <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* [TTS] Fix FastPitch data prep tutorial (#7524)

Signed-off-by: Ryan <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* add italian tokenization (#7486)

* add italian tokenization

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add more ipa lexicon it

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix error deletion

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* add test

Signed-off-by: GiacomoLeoneMaria <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: GiacomoLeoneMaria <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Replace None strategy with auto in tutorial notebooks (#7521) (#7527)

Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* unpin setuptools (#7534) (#7535)

Signed-off-by: fayejf <[email protected]>
Co-authored-by: fayejf <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Update per.py

- removed the `if __name__ == "__main__"` guard (now the metric can be imported);
- removed superfluous classes (like "Sample" and "Statistics");
- switched from a pandas DataFrame to a dict of dicts;
- removed an unnecessary "return";
- fixed notation;
- reduced calculation time

Signed-off-by: Sasha Meister <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Sasha Meister <[email protected]>

* Create punctuation_rates.py

Signed-off-by: Sasha Meister <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Sasha Meister <[email protected]>

* Format fixing

Signed-off-by: Sasha Meister <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* added nemo.logging, header, docstrings, how to use

Signed-off-by: Sasha Meister <[email protected]>

* Added assertions to rate_punctuation.py

Signed-off-by: Sasha Meister <[email protected]>

* fix typo

Signed-off-by: Sasha Meister <[email protected]>

* added function for import and call, docstrings

Signed-off-by: Sasha Meister <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Sasha Meister <[email protected]>

* remove auto generated examples (#7510)

* explicitly remove autogenerated examples for data parallel evaluation

Signed-off-by: arendu <[email protected]>

* mark autogenerated and remove it for test

Signed-off-by: arendu <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: arendu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Sasha Meister <[email protected]>

* Add the `strategy` argument to `MegatronGPTModel.generate()` (#7264)

It is passed as an explicit argument rather than through
`**strategy_args` so as to ensure someone cannot accidentally pass other
arguments that would end up being ignored.

It is a keyword-only argument to ensure that if in the future we want to
update the signature to `**strategy_args`, we can do it without breaking
code.
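
The keyword-only pattern described above looks like this in stripped-down form — the class and method body below are placeholders mirroring the PR's description, not NeMo's real `generate()` signature:

```python
class MegatronGPTModelSketch:
    """Minimal stand-in showing a keyword-only `strategy` argument."""

    def generate(self, inputs, length_params, *, strategy=None):
        # The bare `*` makes `strategy` keyword-only: callers must write
        # generate(..., strategy=s).  A later change of the signature to
        # **strategy_args therefore cannot silently break positional
        # call sites, and stray positional arguments raise immediately
        # instead of being swallowed and ignored.
        return {"inputs": inputs, "strategy": strategy}
```

Calling `generate(inputs, params, "greedy")` raises a `TypeError`, which is exactly the accidental-argument case the explicit keyword-only parameter guards against.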

Signed-off-by: Olivier Delalleau <[email protected]>
Signed-off-by: Sasha Meister <[email protected]>

* Fix PTL2.0 related ASR bugs in r1.21.0: Val metrics logging, None dataloader issue (#7531) (#7533)

* fix none dataloader issue ptl2

* ptl2.0 logging fixes for rnnt_models

---------

Signed-off-by: KunalDhawan <[email protected]>
Co-authored-by: Kunal Dhawan <[email protected]>
Co-authored-by: Nithi…
Showing 56 changed files with 4,130 additions and 53 deletions.
273 changes: 272 additions & 1 deletion Jenkinsfile
@@ -126,6 +126,256 @@ pipeline {
sh 'CUDA_VISIBLE_DEVICES="" NEMO_NUMBA_MINVER=0.53 pytest -m "not pleasefixme" --cpu --with_downloads --relax_numba_compat'
}
}
//
// stage('L2: Multimodal Imagen Train') {
// when {
// anyOf {
// branch 'main'
// changeRequest target: 'main'
// }
// }
// failFast true
// steps {
// sh "rm -rf /home/TestData/multimodal/imagen_train"
// sh "pip install webdataset==0.2.48"
// sh "python examples/multimodal/text_to_image/imagen/imagen_training.py \
// trainer.precision=16 \
// trainer.num_nodes=1 \
// trainer.devices=1 \
// ++exp_manager.max_time_per_run=00:00:03:00 \
// trainer.max_steps=20 \
// model.micro_batch_size=1 \
// model.global_batch_size=1 \
// model.data.synthetic_data=True \
// exp_manager.exp_dir=/home/TestData/multimodal/imagen_train \
// model.inductor=False \
// model.unet.flash_attention=False \
// "
// sh "pip install 'webdataset>=0.1.48,<=0.1.62'"
// sh "rm -rf /home/TestData/multimodal/imagen_train"
// }
// }
//
// stage('L2: Multimodal Stable Diffusion Train') {
// when {
// anyOf {
// branch 'main'
// changeRequest target: 'main'
// }
// }
// failFast true
// steps {
// sh "rm -rf /home/TestData/multimodal/stable_diffusion_train"
// sh "pip install webdataset==0.2.48"
// sh "python examples/multimodal/text_to_image/stable_diffusion/sd_train.py \
// trainer.precision=16 \
// trainer.num_nodes=1 \
// trainer.devices=1 \
// ++exp_manager.max_time_per_run=00:00:03:00 \
// trainer.max_steps=20 \
// model.micro_batch_size=1 \
// model.global_batch_size=1 \
// model.data.synthetic_data=True \
// exp_manager.exp_dir=/home/TestData/multimodal/stable_diffusion_train \
// model.inductor=False \
// model.cond_stage_config._target_=nemo.collections.multimodal.modules.stable_diffusion.encoders.modules.FrozenCLIPEmbedder \
// ++model.cond_stage_config.version=openai/clip-vit-large-patch14 \
// ++model.cond_stage_config.max_length=77 \
// ~model.cond_stage_config.restore_from_path \
// ~model.cond_stage_config.freeze \
// ~model.cond_stage_config.layer \
// model.unet_config.from_pretrained=null \
// model.first_stage_config.from_pretrained=null \
// model.unet_config.use_flash_attention=False \
// "
// sh "pip install 'webdataset>=0.1.48,<=0.1.62'"
// sh "rm -rf /home/TestData/multimodal/stable_diffusion_train"
// }
// }
// stage('L2: Multimodal ControlNet Train') {
// when {
// anyOf {
// branch 'main'
// changeRequest target: 'main'
// }
// }
// failFast true
// steps {
// sh "rm -rf /home/TestData/multimodal/controlnet_train"
// sh "pip install webdataset==0.2.48"
// sh "python examples/multimodal/text_to_image/controlnet/controlnet_train.py \
// trainer.precision=16 \
// trainer.num_nodes=1 \
// trainer.devices=1 \
// ++exp_manager.max_time_per_run=00:00:03:00 \
// trainer.max_steps=20 \
// model.micro_batch_size=1 \
// model.global_batch_size=1 \
// model.data.synthetic_data=True \
// exp_manager.exp_dir=/home/TestData/multimodal/controlnet_train \
// model.inductor=False \
// model.image_logger.max_images=0 \
// model.control_stage_config.params.from_pretrained_unet=null \
// model.unet_config.from_pretrained=null \
// model.first_stage_config.from_pretrained=null \
// model.unet_config.use_flash_attention=False \
// "
// sh "pip install 'webdataset>=0.1.48,<=0.1.62'"
// sh "rm -rf /home/TestData/multimodal/controlnet_train"
// }
// }
// stage('L2: Multimodal DreamBooth Train') {
// when {
// anyOf {
// branch 'main'
// changeRequest target: 'main'
// }
// }
// failFast true
// steps {
// sh "rm -rf /home/TestData/multimodal/dreambooth_train"
// sh "pip install webdataset==0.2.48"
// sh "python examples/multimodal/text_to_image/dreambooth/dreambooth.py \
// trainer.precision=16 \
// trainer.num_nodes=1 \
// trainer.devices=1 \
// ++exp_manager.max_time_per_run=00:00:03:00 \
// trainer.max_steps=20 \
// model.micro_batch_size=1 \
// model.global_batch_size=1 \
// exp_manager.exp_dir=/home/TestData/multimodal/dreambooth_train \
// model.inductor=False \
// model.cond_stage_config._target_=nemo.collections.multimodal.modules.stable_diffusion.encoders.modules.FrozenCLIPEmbedder \
// ++model.cond_stage_config.version=openai/clip-vit-large-patch14 \
// ++model.cond_stage_config.max_length=77 \
// ~model.cond_stage_config.restore_from_path \
// ~model.cond_stage_config.freeze \
// ~model.cond_stage_config.layer \
// model.unet_config.from_pretrained=null \
// model.first_stage_config.from_pretrained=null \
// model.data.instance_dir=/home/TestData/multimodal/tiny-dreambooth \
// model.unet_config.use_flash_attention=False \
// "
// sh "pip install 'webdataset>=0.1.48,<=0.1.62'"
// sh "rm -rf /home/TestData/multimodal/dreambooth_train"
// }
// }
// stage('L2: Vision ViT Pretrain TP=1') {
// when {
// anyOf {
// branch 'main'
// changeRequest target: 'main'
// }
// }
// failFast true
// steps {
// sh "rm -rf /home/TestData/vision/vit_pretrain_tp1"
// sh "pip install webdataset==0.2.48"
// sh "python examples/vision/vision_transformer/megatron_vit_classification_pretrain.py \
// trainer.precision=16 \
// model.megatron_amp_O2=False \
// trainer.num_nodes=1 \
// trainer.devices=1 \
// trainer.val_check_interval=5 \
// ++exp_manager.max_time_per_run=00:00:03:00 \
// trainer.max_steps=20 \
// model.micro_batch_size=2 \
// model.global_batch_size=4 \
// model.tensor_model_parallel_size=1 \
// model.pipeline_model_parallel_size=1 \
// model.data.num_workers=0 \
// exp_manager.create_checkpoint_callback=False \
// model.data.data_path=[/home/TestData/multimodal/tiny-imagenet/train,/home/TestData/multimodal/tiny-imagenet/val] \
// exp_manager.exp_dir=/home/TestData/vision/vit_pretrain_tp1 "
// sh "pip install 'webdataset>=0.1.48,<=0.1.62'"
// sh "rm -rf /home/TestData/vision/vit_pretrain_tp1"
// }
// }
//
// stage('L2: Multimodal CLIP Pretrain TP=1') {
// when {
// anyOf {
// branch 'main'
// changeRequest target: 'main'
// }
// }
// failFast true
// steps {
// sh "rm -rf /home/TestData/multimodal/clip_pretrain_tp1"
// sh "pip install webdataset==0.2.48"
// sh "python examples/multimodal/vision_language_foundation/clip/megatron_clip_pretrain.py \
// trainer.precision=16 \
// model.megatron_amp_O2=False \
// trainer.num_nodes=1 \
// trainer.devices=1 \
// trainer.val_check_interval=10 \
// ++exp_manager.max_time_per_run=00:00:03:00 \
// trainer.max_steps=20 \
// model.micro_batch_size=1 \
// model.global_batch_size=1 \
// model.tensor_model_parallel_size=1 \
// model.pipeline_model_parallel_size=1 \
// exp_manager.create_checkpoint_callback=False \
// model.data.num_workers=0 \
// model.vision.num_layers=2 \
// model.text.num_layers=2 \
// model.vision.patch_dim=32 \
// model.vision.encoder_seq_length=49 \
// model.vision.class_token_length=7 \
// model.data.train.dataset_path=[/home/TestData/multimodal/tiny-clip/00000.tar] \
// model.data.validation.dataset_path=[/home/TestData/multimodal/tiny-clip/00000.tar] \
// model.data.webdataset.local_root_path=/ \
// exp_manager.exp_dir=/home/TestData/multimodal/clip_pretrain_tp1 "
// sh "pip install 'webdataset>=0.1.48,<=0.1.62'"
// sh "rm -rf /home/TestData/multimodal/clip_pretrain_tp1"
// }
// }
//
// stage('L2: Multimodal NeVA Pretrain TP=1') {
// when {
// anyOf {
// branch 'main'
// changeRequest target: 'main'
// }
// }
// failFast true
// steps {
// sh "rm -rf /home/TestData/multimodal/neva_pretrain_tp1"
// sh "pip install webdataset==0.2.48"
// sh "python examples/multimodal/multimodal_llm/neva/neva_pretrain.py \
// trainer.precision=bf16 \
// model.megatron_amp_O2=False \
// trainer.num_nodes=1 \
// trainer.devices=1 \
// trainer.val_check_interval=10 \
// trainer.limit_val_batches=5 \
// trainer.log_every_n_steps=1 \
// ++exp_manager.max_time_per_run=00:00:03:00 \
// trainer.max_steps=20 \
// model.micro_batch_size=2 \
// model.global_batch_size=4 \
// model.tensor_model_parallel_size=1 \
// model.pipeline_model_parallel_size=1 \
// exp_manager.create_checkpoint_callback=False \
// model.data.data_path=/home/TestData/multimodal/tiny-neva/dummy.json \
// model.data.image_folder=/home/TestData/multimodal/tiny-neva/images \
// model.tokenizer.library=sentencepiece \
// model.tokenizer.model=/home/TestData/multimodal/tiny-neva/tokenizer_add_special.model \
// model.num_layers=2 \
// model.hidden_size=5120 \
// model.ffn_hidden_size=13824 \
// model.num_attention_heads=40 \
// model.normalization=rmsnorm \
// model.data.num_workers=0 \
// model.data.conv_template=llama_2 \
// model.mm_cfg.vision_encoder.from_pretrained='openai/clip-vit-large-patch14' \
// model.mm_cfg.llm.from_pretrained=null \
// model.use_flash_attention=false \
// exp_manager.exp_dir=/home/TestData/multimodal/neva_pretrain_tp1 "
// sh "pip install 'webdataset>=0.1.48,<=0.1.62'"
// sh "rm -rf /home/TestData/multimodal/neva_pretrain_tp1"
// }
// }
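The commented-out stages above all follow the same smoke-test pattern: pin `webdataset`, run a short training job against tiny or synthetic data with `trainer.max_steps=20`, restore the original `webdataset` pin, and clean up the experiment directory before and after. A minimal sketch of that pattern (paths and overrides are taken from the NeVA stage above; the `DRY_RUN` guard is an addition here so the sketch can run without a GPU or a NeMo checkout):

```python
# Sketch of the smoke-test pattern shared by the commented-out stages above:
# clean up, pin webdataset, run a short training job, restore the pin, clean up.
# DRY_RUN is added for illustration so the sketch runs anywhere.
import shlex
import subprocess

DRY_RUN = True
EXP_DIR = "/home/TestData/multimodal/neva_pretrain_tp1"

def run(cmd: str) -> str:
    """Echo the command under DRY_RUN; otherwise execute it like a Jenkins sh step."""
    if DRY_RUN:
        return f"would run: {cmd}"
    subprocess.run(shlex.split(cmd), check=True)
    return cmd

steps = [
    f"rm -rf {EXP_DIR}",                          # clean any previous run
    "pip install webdataset==0.2.48",             # pin the loader version the test expects
    "python examples/multimodal/multimodal_llm/neva/neva_pretrain.py "
    f"trainer.devices=1 trainer.max_steps=20 exp_manager.exp_dir={EXP_DIR}",
    "pip install 'webdataset>=0.1.48,<=0.1.62'",  # restore the default pin
    f"rm -rf {EXP_DIR}",                          # clean up after the smoke test
]

for step in steps:
    print(run(step))
```

With `DRY_RUN = False` the same loop executes the real commands, which is effectively what the Jenkins `sh` steps in each stage do.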

// TODO: this test requires TE >= v0.11, which is not available in the 23.06 container.
// Please uncomment this test once mcore CI is ready.
@@ -4826,6 +5076,7 @@ assert_frame_equal(training_curve, gt_curve, rtol=1e-3, atol=1e-3)"'''
}
}
}

stage('L2: TTS Fast dev runs 1') {
when {
anyOf {
@@ -4971,7 +5222,27 @@ assert_frame_equal(training_curve, gt_curve, rtol=1e-3, atol=1e-3)"'''
}
}
}

stage('L2: NeRF') {
when {
anyOf {
branch 'r1.21.0'
changeRequest target: 'r1.21.0'
}
}
parallel {
stage('DreamFusion') {
steps {
sh 'python examples/multimodal/text_to_image/nerf/main.py \
trainer.num_nodes=1 \
trainer.devices="[0]" \
trainer.max_steps=1000 \
model.prompt="a DSLR photo of a delicious hamburger" \
exp_manager.exp_dir=examples/multimodal/text_to_image/nerf/dreamfusion_results'
sh 'rm -rf examples/multimodal/text_to_image/nerf/dreamfusion_results'
}
}
}
}
stage('L??: Speech Checkpoints tests') {
when {
anyOf {
5 changes: 5 additions & 0 deletions docs/source/conf.py
@@ -61,6 +61,9 @@
'ipadic',
'psutil',
'regex',
'PIL',
'boto3',
'taming',
]

_skipped_autodoc_mock_imports = ['wrapt', 'numpy']
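For context, `autodoc_mock_imports` (the list extended above with `PIL`, `boto3`, and `taming`) tells Sphinx to substitute mock modules for dependencies that are heavy or unavailable at docs-build time, so autodoc can still import the packages it documents. A rough illustration of the idea using `unittest.mock` — a sketch of the behavior, not Sphinx's actual implementation:

```python
import sys
from unittest import mock

# Register stand-in modules for dependencies we do not want to install
# just to build the docs (mirrors the three names added to conf.py above).
for name in ['PIL', 'boto3', 'taming']:
    sys.modules[name] = mock.MagicMock()

import boto3  # resolves to the mock registered above, no real dependency needed

# Any attribute access or call on the mock succeeds and returns another mock,
# which is enough for autodoc to import modules that depend on these packages.
client = boto3.client('s3')
print(type(client).__name__)
```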
@@ -125,6 +128,8 @@
'tts/tts_all.bib',
'text_processing/text_processing_all.bib',
'core/adapters/adapter_bib.bib',
'multimodal/mm_all.bib',
'vision/vision_all.bib',
]

intersphinx_mapping = {
19 changes: 18 additions & 1 deletion docs/source/index.rst
@@ -47,7 +47,7 @@ NVIDIA NeMo User Guide
nlp/api
nlp/megatron_onnx_export
nlp/models


.. toctree::
:maxdepth: 1
@@ -71,6 +71,23 @@
text_processing/g2p/g2p
common/intro

.. toctree::
:maxdepth: 3
:caption: Multimodal (MM)
:name: Multimodal

multimodal/mllm/intro
multimodal/vlm/intro
multimodal/text2img/intro
multimodal/nerf/intro
multimodal/api

.. toctree::
:maxdepth: 2
:caption: Vision
:name: vision

vision/intro

.. toctree::
:maxdepth: 3

0 comments on commit fe358f4
