diff --git a/examples/multimodal/multimodal_llm/neva/conf/llava_config.yaml b/examples/multimodal/multimodal_llm/neva/conf/llava_config.yaml
index 3ec90b2d1b53..d8a31fa19ca9 100644
--- a/examples/multimodal/multimodal_llm/neva/conf/llava_config.yaml
+++ b/examples/multimodal/multimodal_llm/neva/conf/llava_config.yaml
@@ -71,10 +71,10 @@ model:
       freeze: False
       model_type: llama_2 # Only support nvgpt or llama_2
     vision_encoder:
-      from_pretrained: "openai/clip-vit-large-patch14" # path or name
+      from_pretrained: "openai/clip-vit-large-patch14-336" # path or name
       from_hf: True
       patch_dim: 14
-      crop_size: [224, 224]
+      crop_size: [336, 336]
       hidden_size: 1024 # could be found from model but tricky in code
       vision_select_layer: -2 # default to the last layer
       class_token_length: 1
diff --git a/scripts/checkpoint_converters/convert_llava_hf_to_nemo.py b/scripts/checkpoint_converters/convert_llava_hf_to_nemo.py
index d91899348e8c..85f65ca05ecf 100644
--- a/scripts/checkpoint_converters/convert_llava_hf_to_nemo.py
+++ b/scripts/checkpoint_converters/convert_llava_hf_to_nemo.py
@@ -292,7 +292,7 @@ def convert(args):
     batch_dict = hf_tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
     batch_dict_cuda = {k: v.cuda() for k, v in batch_dict.items()}
     hf_model = hf_model.cuda().eval()
-    model = model.eval()
+    model = model.cuda().eval()
 
     hf_outputs = hf_model(**batch_dict_cuda, output_hidden_states=True)
     ids = batch_dict_cuda['input_ids']
@@ -307,7 +307,7 @@ def convert(args):
     attn_mask, _, pos_ids = attn_mask_and_pos_ids
 
     outputs = model(
-        tokens=tokens, text_position_ids=pos_ids.cuda(), attention_mask=attn_mask.cuda(), labels=None
+        tokens=tokens.cuda(), text_position_ids=pos_ids.cuda(), attention_mask=attn_mask.cuda(), labels=None
     )
 
     hf_next_token = hf_outputs.logits[0, -1].argmax()
diff --git a/tutorials/multimodal/NeVA Tutorial.ipynb b/tutorials/multimodal/NeVA Tutorial.ipynb
index 5e2607dcd801..7f4d3cf79779 100644
--- a/tutorials/multimodal/NeVA Tutorial.ipynb
+++ b/tutorials/multimodal/NeVA Tutorial.ipynb
@@ -2,8 +2,13 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "a2225742c5996304",
-   "metadata": {},
+   "id": "672caa4e",
+   "metadata": {
+    "collapsed": false,
+    "jupyter": {
+     "outputs_hidden": false
+    }
+   },
    "source": [
     "# NeVA Training / Inference Tutorial\n",
     "\n",
@@ -20,28 +25,19 @@
     "\n",
     "This notebook illustrates how to train and perform inference using NeVA with the NeMo Toolkit. NeVA originates from [LLaVA](https://github.com/haotian-liu/LLaVA) (Large Language and Vision Assistant) and is a powerful multimodal image-text instruction tuned model optimized within the NeMo Framework. \n",
     "\n",
-    "\n",
     "This tutorial will guide you through the following topics:\n",
-    "1. Training a NeVA model\n",
-    "2. Performing inference with the trained model\n",
+    "1. Preparing prerequisites for NeVA training\n",
+    "2. Training a NeVA model\n",
+    "3. Performing inference with the trained model\n",
     "\n",
     "## Datasets\n",
     "\n",
-    "After downloading all below datasets for pretraining and instruction tuning, your dataset directory should look something similar to:\n",
+    "Please refer to the [NeMo User Guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/multimodalmodels/multimodallanguagemodel/neva/dataprep.html#prepare-pretraining-and-fine-tuning-datasets) for preparing the NeVA datasets for pre-training and fine-tuning.\n",
     "\n",
-    "```\n",
-    "LLaVA-Pretrain-LCS-558K\n",
-    "├── blip_laion_cc_sbu_558k.json\n",
-    "├── images\n",
-    "LLaVA-Instruct-mixture\n",
-    "├── llava_v1_5_mix665k.json\n",
-    "└── images\n",
-    " └── ...\n",
-    "```\n",
-    "\n",
     "### Pre-Training Dataset\n",
     "\n",
-    "The pre-training dataset is open-sourced from the LLaVA implementation and can be downloaded [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain). The dataset consists of a 558K subset of the LAION-CC-SBU dataset with BLIP captions. \n",
+    "The pre-training dataset is open-sourced from the LLaVA implementation and can be downloaded [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain). The dataset consists of a 558K subset of the LAION-CC-SBU dataset with BLIP captions.\n",
     "\n",
     "The associated images for pretraining can be downloaded via HuggingFace [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/blob/main/images.zip).\n",
     "\n",
@@ -66,14 +62,73 @@
     " └── VG_100K_2\n",
     "```\n",
     "\n",
-    "## Training\n",
+    "After downloading all of the datasets above for pre-training and instruction tuning, place the data folders under `/workspace/datasets`. Your dataset directory should look similar to:\n",
     "\n",
+    "```\n",
+    "LLaVA-Pretrain-LCS-558K\n",
+    "├── blip_laion_cc_sbu_558k.json\n",
+    "├── images\n",
+    "LLaVA-Instruct-mixture\n",
+    "├── llava_v1_5_mix665k.json\n",
+    "└── images\n",
+    " └── ...\n",
+    "```\n",
+    "\n",
+    "## Setting up Checkpoint and Tokenizer\n",
+    "\n",
+    "In this notebook, we first need to convert the Vicuna 1.5 checkpoint into the .nemo format. Meanwhile, special tokens must be incorporated into the tokenizer for NeVA training. After downloading language models from Hugging Face, ensure you also fetch the corresponding tokenizer model. We use the 7B chat model as a reference."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "26d34982-61e5-4dd0-9fd9-ff25261bc164",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "! mkdir -p /workspace/checkpoints\n",
+    "\n",
+    "# Download vicuna checkpoint from HF\n",
+    "! git clone https://huggingface.co/lmsys/vicuna-7b-v1.5 /workspace/checkpoints/vicuna-7b-v1.5\n",
+    "\n",
+    "# Convert checkpoint\n",
+    "! python /opt/NeMo/scripts/checkpoint_converters/convert_llama_hf_to_nemo.py \\\n",
+    " --input_name_or_path /workspace/checkpoints/vicuna-7b-v1.5 \\\n",
+    " --output_path /workspace/checkpoints/vicuna-7b-v1.5.nemo\n",
+    "\n",
+    "# Prepare tokenizer\n",
+    "! cd /opt && git clone https://github.com/google/sentencepiece.git && \\\n",
+    " cd sentencepiece && \\\n",
+    " mkdir build && \\\n",
+    " cd build && \\\n",
+    " cmake .. && \\\n",
+    " make && \\\n",
+    " make install && \\\n",
+    " ldconfig && \\\n",
+    "cd /opt/sentencepiece/src/ && protoc --python_out=/opt/NeMo/scripts/tokenizers/ sentencepiece_model.proto\n",
+    "\n",
+    "! python /opt/NeMo/scripts/tokenizers/add_special_tokens_to_sentencepiece.py \\\n",
+    "--input_file /workspace/checkpoints/vicuna-7b-v1.5/tokenizer.model \\\n",
+    "--output_file /workspace/checkpoints/vicuna-7b-v1.5/tokenizer_neva.model \\\n",
+    "--is_userdefined \\\n",
+    "--tokens \"<extra_id_0>\" \"<extra_id_1>\" \"<extra_id_2>\" \"<extra_id_3>\" \\\n",
+    " \"<extra_id_4>\" \"<extra_id_5>\" \"<extra_id_6>\" \"<extra_id_7>\"\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "b620ef9b-e40e-45a0-8858-63f833cdc3dc",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "## Training\n",
     "\n",
     "### Feature Alignment Pre-Training\n",
     "\n",
     "We provide a set of scripts for pre-training and fine-tuning which can be kicked off with CLI flags defining specified arguments. \n",
     "\n",
-    "An example of a pre-training script execution:"
+    "An example of a pre-training script execution (note that this script only performs 100 steps with a small micro batch size; this is not a full training run):"
    ]
   },
   {
@@ -92,45 +147,44 @@
     " trainer.precision=bf16 \\\n",
     " trainer.num_nodes=1 \\\n",
     " trainer.devices=4 \\\n",
-    " trainer.val_check_interval=1000 \\\n",
+    " trainer.val_check_interval=50 \\\n",
     " trainer.limit_val_batches=5 \\\n",
     " trainer.log_every_n_steps=1 \\\n",
-    " trainer.max_steps=1000 \\\n",
+    " trainer.max_steps=100 \\\n",
     " model.megatron_amp_O2=True \\\n",
     " model.micro_batch_size=1 \\\n",
-    " model.global_batch_size=2 \\\n",
-    " model.tensor_model_parallel_size=4 \\\n",
+    " model.global_batch_size=4 \\\n",
+    " model.tensor_model_parallel_size=1 \\\n",
     " model.pipeline_model_parallel_size=1 \\\n",
     " model.mcore_gpt=True \\\n",
     " model.transformer_engine=True \\\n",
-    " model.data.data_path=/path/to/datasets/LLaVA-Pretrain-LCS-558K/blip_laion_cc_sbu_558k.json \\\n",
-    " model.data.image_folder=/path/to/dataset/LLaVA-Pretrain-LCS-558K/images \\\n",
+    " model.data.data_path=/workspace/datasets/LLaVA-Pretrain-LCS-558K/blip_laion_cc_sbu_558k.json \\\n",
+    " model.data.image_folder=/workspace/datasets/LLaVA-Pretrain-LCS-558K/images \\\n",
     " model.tokenizer.library=sentencepiece \\\n",
-    " model.tokenizer.model=/path/to/tokenizer/model \\\n",
+    " model.tokenizer.model=/workspace/checkpoints/vicuna-7b-v1.5/tokenizer_neva.model \\\n",
     " model.encoder_seq_length=4096 \\\n",
     " model.num_layers=32 \\\n",
     " model.hidden_size=4096 \\\n",
-    " model.ffn_hidden_size=16384 \\\n",
+    " model.ffn_hidden_size=11008 \\\n",
     " model.num_attention_heads=32 \\\n",
-    " model.normalization=layernorm1p \\\n",
+    " model.normalization=rmsnorm \\\n",
     " model.do_layer_norm_weight_decay=False \\\n",
     " model.apply_query_key_layer_scaling=True \\\n",
-    " model.activation=squared-relu \\\n",
+    " model.bias=False \\\n",
+    " model.activation=fast-swiglu \\\n",
     " model.headscale=False \\\n",
     " model.position_embedding_type=rope \\\n",
-    " model.rotary_percentage=0.5 \\\n",
+    " model.rotary_percentage=1.0 \\\n",
     " model.num_query_groups=null \\\n",
     " model.data.num_workers=0 \\\n",
-    " model.mm_cfg.llm.from_pretrained=/path/to/checkpoint \\\n",
-    " model.mm_cfg.llm.model_type=nvgpt \\\n",
-    " model.data.conv_template=nvgpt \\\n",
+    " model.mm_cfg.llm.from_pretrained=/workspace/checkpoints/vicuna-7b-v1.5.nemo \\\n",
+    " model.mm_cfg.llm.model_type=v1 \\\n",
+    " model.data.conv_template=v1 \\\n",
     " model.mm_cfg.vision_encoder.from_pretrained='openai/clip-vit-large-patch14' \\\n",
     " model.mm_cfg.vision_encoder.from_hf=True \\\n",
-    " model.data.image_token_len=256 \\\n",
     " model.optim.name=\"fused_adam\" \\\n",
     " exp_manager.create_checkpoint_callback=True \\\n",
-    " exp_manager.create_wandb_logger=False \\\n",
-    " exp_manager.wandb_logger_kwargs.project=neva_demo"
+    " exp_manager.create_wandb_logger=False"
    ]
   },
   {
@@ -144,9 +198,9 @@
     "\n",
     "### Image-Language Pair Instruction Fine-Tuning\n",
     "\n",
-    "Fine-tuning can also be run from within the container via a similar command leveraging the `neva_finetune.py` script.\n",
+    "Fine-tuning can also be run from within the container via a similar command leveraging the `neva_finetune.py` script. We use the checkpoint saved during the pre-training step for further fine-tuning, specified by `model.restore_from_path=/workspace/nemo_experiments/nemo_neva/checkpoints/nemo_neva.nemo`.\n",
     "\n",
-    "An example of an image-text pair instruction tuning script execution:"
+    "An example of an image-text pair instruction tuning script execution (note that this script only performs 100 steps with a small micro batch size; this is not a full training run):"
    ]
   },
   {
@@ -164,42 +218,44 @@
     "++cluster_type=BCP \\\n",
     " trainer.precision=bf16 \\\n",
     " trainer.num_nodes=1 \\\n",
-    " trainer.devices=1 \\\n",
-    " trainer.val_check_interval=100 \\\n",
+    " trainer.devices=4 \\\n",
+    " trainer.val_check_interval=50 \\\n",
     " trainer.limit_val_batches=50 \\\n",
-    " trainer.max_steps=4900 \\\n",
+    " trainer.max_steps=100 \\\n",
+    " model.restore_from_path=/workspace/nemo_experiments/nemo_neva/checkpoints/nemo_neva.nemo \\\n",
     " model.megatron_amp_O2=True \\\n",
-    " model.micro_batch_size=4 \\\n",
-    " model.global_batch_size=32 \\\n",
-    " model.tensor_model_parallel_size=1 \\\n",
+    " model.micro_batch_size=1 \\\n",
+    " model.global_batch_size=2 \\\n",
+    " model.tensor_model_parallel_size=4 \\\n",
     " model.pipeline_model_parallel_size=1 \\\n",
     " model.mcore_gpt=True \\\n",
     " model.transformer_engine=True \\\n",
-    " model.data.data_path=/path/to/dataset/LLaVA-Pretrain-LCS-558K/blip_laion_cc_sbu_558k.json \\\n",
-    " model.data.image_folder=/path/to/dataset/LLaVA-Pretrain-LCS-558K/images \\\n",
-    " model.tokenizer.library=megatron \\\n",
-    " model.tokenizer.model=/path/to/tokenizer \\\n",
+    " model.data.data_path=/workspace/datasets/LLaVA-Instruct-mixture/llava_v1_5_mix665k.json \\\n",
+    " model.data.image_folder=/workspace/datasets/LLaVA-Instruct-mixture/images \\\n",
+    " model.tokenizer.library=sentencepiece \\\n",
+    " model.tokenizer.model=/workspace/checkpoints/vicuna-7b-v1.5/tokenizer_neva.model \\\n",
     " model.encoder_seq_length=4096 \\\n",
-    " model.num_layers=24 \\\n",
-    " model.hidden_size=2048 \\\n",
-    " model.ffn_hidden_size=5440 \\\n",
-    " model.num_attention_heads=16 \\\n",
-    " model.normalization=layernorm1p \\\n",
+    " model.num_layers=32 \\\n",
+    " model.hidden_size=4096 \\\n",
+    " model.ffn_hidden_size=11008 \\\n",
+    " model.num_attention_heads=32 \\\n",
+    " model.normalization=rmsnorm \\\n",
     " model.do_layer_norm_weight_decay=False \\\n",
     " model.apply_query_key_layer_scaling=True \\\n",
+    " model.bias=False \\\n",
     " model.activation=fast-swiglu \\\n",
     " model.headscale=False \\\n",
     " model.position_embedding_type=rope \\\n",
-    " model.rotary_percentage=0.5 \\\n",
+    " model.rotary_percentage=1.0 \\\n",
     " model.num_query_groups=null \\\n",
-    " model.data.num_workers=8 \\\n",
-    " model.mm_cfg.llm.from_pretrained=/path/to/checkpoint \\\n",
-    " model.mm_cfg.llm.model_type=nvgpt \\\n",
-    " exp_manager.create_checkpoint_callback=True \\\n",
-    " model.data.conv_template=nvgpt \\\n",
+    " model.data.num_workers=0 \\\n",
+    " model.mm_cfg.llm.from_pretrained=/workspace/checkpoints/vicuna-7b-v1.5.nemo \\\n",
+    " model.mm_cfg.llm.model_type=v1 \\\n",
+    " model.data.conv_template=v1 \\\n",
     " model.mm_cfg.vision_encoder.from_pretrained='openai/clip-vit-large-patch14' \\\n",
     " model.mm_cfg.vision_encoder.from_hf=True \\\n",
-    " model.data.image_token_len=256 \\\n",
+    " exp_manager.create_checkpoint_callback=True \\\n",
+    " exp_manager.name=\"nemo_neva_finetune\" \\\n",
     " model.optim.name=\"fused_adam\""
    ]
   },
@@ -212,38 +268,7 @@
     "\n",
     "### From Pre-trained Checkpoints\n",
     "\n",
-    "If you would like to use NeVA for inference from pre-trained checkpoint via HuggingFace, you can convert from HuggingFace to `.nemo` first.\n",
-    "\n",
-    "First, download the model checkpoint from HuggingFace [here](https://huggingface.co/liuhaotian/llava-v1.5-7b). The tokenizer (stored as `tokenizer.model` within the pretrained checkpoint) must be modified with the following commands:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "0d30003f",
-   "metadata": {
-    "vscode": {
-     "languageId": "plaintext"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "! cd /opt/sentencepiece/src/\n",
-    "! protoc --python_out=/opt/NeMo/scripts/tokenizers/ sentencepiece_model.proto\n",
-    "! python /opt/NeMo/scripts/tokenizers/add_special_tokens_to_sentencepiece.py \\\n",
-    "--input_file /path/to/tokenizer.model \\\n",
-    "--output_file /path/to/tokenizer_neva.model \\\n",
-    "--is_userdefined \\\n",
-    "--tokens \"<extra_id_0>\" \"<extra_id_1>\" \"<extra_id_2>\" \"<extra_id_3>\" \\\n",
-    " \"<extra_id_4>\" \"<extra_id_5>\" \"<extra_id_6>\" \"<extra_id_7>\""
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "470c093b",
-   "metadata": {},
-   "source": [
-    "Finally, convert to `.nemo` via the provided script:"
+    "If you would like to use NeVA for inference from a pre-trained HuggingFace checkpoint, you can either use the checkpoint produced by the fine-tuning step or convert a checkpoint from HuggingFace to `.nemo` first. Since we did not run a full training in this tutorial with NeMo, we will show how you can convert a checkpoint from Hugging Face."
    ]
   },
   {
@@ -257,10 +282,10 @@
    },
    "outputs": [],
    "source": [
-    "! python /opt/NeMo/examples/multimodal/mllm/neva/convert_hf_llava_to_neva.py \\\n",
-    "--in-file /path/to/llava-v1.5-7b \\\n",
-    "--out-file /path/to/llava-v1.5-7b.nemo \\\n",
-    "--tokenizer-model /path/to/tokenizer_neva.model"
+    "! python3 /opt/NeMo/scripts/checkpoint_converters/convert_llava_hf_to_nemo.py \\\n",
+    " --input_name_or_path llava-hf/llava-1.5-7b-hf \\\n",
+    " --output_path /workspace/checkpoints/llava-7b.nemo \\\n",
+    " --tokenizer_path /workspace/checkpoints/vicuna-7b-v1.5/tokenizer_neva.model"
    ]
   },
   {
@@ -288,57 +313,19 @@
    },
    "outputs": [],
    "source": [
+    "! echo '{\"image\": \"RTX4080.png\", \"prompt\": \"<image>\\nCan you describe this image?\"}' > sample.json\n",
+    "! mkdir images && wget https://assets.nvidia.partners/images/png/TUF_Gaming_GeForce_RTX_4080_SUPER_OC_edition_packaging_with_card__12419.png --output-document=images/RTX4080.png\n",
     "! torchrun --nproc_per_node=1 /opt/NeMo/examples/multimodal/multimodal_llm/neva/neva_evaluation.py \\\n",
     "tensor_model_parallel_size=1 \\\n",
     "pipeline_model_parallel_size=1 \\\n",
-    "neva_model_file=/path/to/checkpoint \\\n",
+    "neva_model_file=/workspace/checkpoints/llava-7b.nemo \\\n",
     "trainer.devices=1 \\\n",
     "trainer.precision=bf16 \\\n",
-    "prompt_file=/path/to/prompt/file \\\n",
-    "inference.media_base_path=/path/to/image \\\n",
-    "output_file=path/for/output/file/ \\\n",
+    "prompt_file=sample.json \\\n",
+    "inference.media_base_path=images \\\n",
+    "output_file=output.json \\\n",
     "inference.temperature=0.2 \\\n",
-    "inference.top_k=0 \\\n",
-    "inference.top_p=0.9 \\\n",
-    "inference.greedy=False \\\n",
-    "inference.add_BOS=False \\\n",
-    "inference.all_probs=False \\\n",
-    "inference.repetition_penalty=1.2 \\\n",
-    "inference.insert_media_token=null \\\n",
-    "inference.tokens_to_generate=256 \\\n",
-    "quantization.algorithm=awq \\\n",
-    "quantization.enable=False"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "7d989385",
-   "metadata": {},
-   "source": [
-    "#### Running Inference via Launcher\n",
-    "\n",
-    "Inference can also be run via the NeMo Launcher, where parameters are specified in the inference config file rather than CLI arguments. To customize the default config provided in `conf/config.yaml` for NeVA inference, see below.\n",
-    "\n",
-    "##### Inference Config Setup\n",
-    "1. Modify `fw_inference` within `defaults` to use `neva/inference` \n",
-    "2. In `stages`, ensure that `fw_inference` is included\n",
-    "3. Within the `inference.yaml` default NeVA inference config file, ensure that the path to the `prompt` file, `neva_model_file`, and `media_base_path` within `inference` are specified.\n",
-    "\n",
-    "Once either the necessary checkpoints have been loaded or the training workflow is complete, inference can be executed within the launcher pipeline with the following command:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "68d434ff",
-   "metadata": {
-    "vscode": {
-     "languageId": "plaintext"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "! python3 main.py"
+    "inference.tokens_to_generate=256"
    ]
   }
  ],
@@ -358,7 +345,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.6"
+   "version": "3.11.6"
   }
  },
  "nbformat": 4,
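
A note on the `llava_config.yaml` change above: switching the vision encoder from `openai/clip-vit-large-patch14` (224×224 crops) to `openai/clip-vit-large-patch14-336` (336×336 crops) changes the number of image patch tokens the language model receives, which is also why the hard-coded `model.data.image_token_len=256` flag no longer appears in the tutorial commands. A minimal sketch of the arithmetic, using only the `patch_dim` and `crop_size` values from the config:

```python
# Patch-token count for a square crop processed by a ViT-style encoder:
# each crop_size x crop_size image is split into (crop_size / patch_dim)^2 patches.
def image_patch_tokens(crop_size: int, patch_dim: int) -> int:
    assert crop_size % patch_dim == 0, "crop size must be a multiple of the patch size"
    per_side = crop_size // patch_dim
    return per_side * per_side

print(image_patch_tokens(224, 14))  # 256 -> matches the old image_token_len=256 flag
print(image_patch_tokens(336, 14))  # 576 -> token count for the patch14-336 encoder
```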
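The two `convert_llava_hf_to_nemo.py` hunks move the NeMo model and the token tensor onto the GPU so that the script's logit comparison against the HuggingFace model runs on a single device. That comparison reduces to checking that both models pick the same greedy next token; below is a standalone, hedged sketch of the idea (the function name and the dummy tensors are illustrative, not part of the converter script):

```python
import torch

def greedy_next_tokens_match(hf_logits: torch.Tensor, nemo_logits: torch.Tensor) -> bool:
    """Return True if two [batch, seq, vocab] logit tensors agree on the greedy next token."""
    hf_next = hf_logits[:, -1, :].argmax(dim=-1)
    nemo_next = nemo_logits[:, -1, :].argmax(dim=-1)
    return bool(torch.equal(hf_next.cpu(), nemo_next.cpu()))

# Dummy stand-ins for hf_outputs.logits and the NeMo forward output; in the real script
# both tensors come from running the same tokenized prompts through the two models.
logits = torch.randn(1, 8, 32000)
print(greedy_next_tokens_match(logits, logits.clone()))  # True: identical logits agree
```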
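`neva_evaluation.py` in the tutorial reads its prompt from the `sample.json` file written with `echo`. If you would rather build that file programmatically, a small sketch is below; the `image`/`prompt` keys mirror the sample written in the notebook, and the paths are only illustrative:

```python
import json
from pathlib import Path

# Mirror the single-example prompt file created with `echo` in the inference step above.
example = {"image": "RTX4080.png", "prompt": "<image>\nCan you describe this image?"}

Path("images").mkdir(exist_ok=True)  # media_base_path passed to neva_evaluation.py
Path("sample.json").write_text(json.dumps(example) + "\n")
print(Path("sample.json").read_text())
```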
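Because the pre-training and fine-tuning commands above hard-code dataset locations under `/workspace/datasets`, a quick existence check before launching a run can catch a misplaced download early. This is a convenience sketch only — the paths are the ones used in the tutorial commands, and nothing here is part of NeMo itself:

```python
from pathlib import Path

# Dataset locations expected by the pre-training and fine-tuning commands above.
expected_paths = [
    "/workspace/datasets/LLaVA-Pretrain-LCS-558K/blip_laion_cc_sbu_558k.json",
    "/workspace/datasets/LLaVA-Pretrain-LCS-558K/images",
    "/workspace/datasets/LLaVA-Instruct-mixture/llava_v1_5_mix665k.json",
    "/workspace/datasets/LLaVA-Instruct-mixture/images",
]

missing = [p for p in expected_paths if not Path(p).exists()]
if missing:
    print("Missing dataset paths:")
    for p in missing:
        print(f"  {p}")
else:
    print("All expected dataset paths are present.")
```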