Minor fix (#11353)
Signed-off-by: Gomathy Venkata Krishnan <[email protected]>
gvenkatakris authored Nov 26, 2024
1 parent 82d9dd2 commit 080bcd7
Showing 2 changed files with 2 additions and 2 deletions.
@@ -5,7 +5,7 @@
"id": "8bc99d2f-9ac6-40c2-b072-12b6cb7b9aca",
"metadata": {},
"source": [
"### Step 3: Step 3: Prune the fine-tuned teacher model to create a student\n",
"### Step 3: Prune the fine-tuned teacher model to create a student\n",
"In the second method, we will width-prune. In width-pruning, we trim the neurons, attention heads, and embedding channels.\n",
"\n",
"Refer to the ``NOTE`` in the **_step-by-step instructions_** section of [introduction.ipynb](./introduction.ipynb) to decide which pruning techniques you would like to explore."
2 changes: 1 addition & 1 deletion tutorials/llm/llama-3/pruning-distillation/README.rst
@@ -16,7 +16,7 @@ Llama 3.1 Pruning and Distillation with NeMo Framework
 Objectives
 ----------

-This tutorial demonstrates how to perform depth-pruning, width-pruning, teacher fine-tuning, and distillation on **Llama 3.1 8B** using the `WikiText-103-v1 <https://huggingface.co/datasets/Salesforce/wikitext/viewer/wikitext-103-v1>_ dataset with the NeMo Framework. The WikiText-103-v1 <https://huggingface.co/datasets/Salesforce/wikitext/viewer/wikitext-103-v1>`_ language modeling dataset comprises over 100 million tokens extracted from verified Good and Featured articles on Wikipedia.
+This tutorial demonstrates how to perform depth-pruning, width-pruning, teacher fine-tuning, and distillation on **Llama 3.1 8B** using the `WikiText-103-v1 <https://huggingface.co/datasets/Salesforce/wikitext/viewer/wikitext-103-v1>`_ dataset with the NeMo Framework. The `WikiText-103-v1 <https://huggingface.co/datasets/Salesforce/wikitext/viewer/wikitext-103-v1>`_ language modeling dataset comprises over 100 million tokens extracted from verified Good and Featured articles on Wikipedia.

 For this demonstration, we will perform teacher correction by running a light fine-tuning procedure on the ``Meta LLama 3.1 8B`` teacher model to generate a fine-tuned teacher model, ``megatron_llama_ft.nemo``, needed for optimal distillation. This fine-tuned teacher model is then trimmed. There are two methods to prune a model: depth-pruning and width-pruning. We will explore both techniques, yielding ``4b_depth_pruned_model.nemo`` and ``4b_width_pruned_model.nemo``, respectively. These models will serve as starting points for distillation to create the final distilled 4B models.

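For reference, the corrected README line uses the standard reStructuredText syntax for an external hyperlink, which wraps the link text and target URL in backticks and closes with ``>`_``. The snippet below is a minimal illustration with a placeholder link name and URL, not text taken from the repository::

    `Example dataset <https://example.com/dataset>`_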
