Commit 697f818: adds start of t5 example and empty readme
djliden committed Nov 27, 2023 (1 parent: 94d0df6)
Showing 2 changed files with 138 additions and 0 deletions.
@@ -0,0 +1,138 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"application/vnd.databricks.v1+cell": {
"cellMetadata": {
"byteLimit": 2048000,
"rowLimit": 10000
},
"inputWidgets": {},
"nuid": "2c11dcf5-8a18-49c6-8047-f8a78fc70d74",
"showTitle": false,
"title": ""
}
},
"source": [
"# Introduction\n", | ||
"\n", | ||
"LLM fine-tuning almost always requires multiple GPUs to be useful or to be possible at all. But if you're relatively new to deep learning, or you've only trained models on single GPUs before, making the jump to distributed training on multiple GPUs and multiple nodes can be extremely challenging and more than a little frustrating.\n", | ||
"\n", | ||
"We're starting with a very small model [t5-small](https://huggingface.co/t5-small) for a few reasons:\n", | ||
"- Learning about model fine-tuning is a lot less frustrating if you start from a place of less complexity and are able to get results quickly!\n", | ||
"- When we get to the point of training larger models on distributed systems, we're going to spend a lot of time and energy on *how* to distribute the model, data, etc., across that system. Starting smaller lets us spend some time at the beginning focusing on the training metrics that directly relate to model performance rather than the complexity involved with distributed training. Eventually we will need both, but there's no reason to try to digest all of it all at once!\n", | ||
"- Starting small and then scaling up will give us a solid intuition of how, when, and why to use the various tools and techniques for training larger models or for using more compute resources to train models faster.\n", | ||
"\n", | ||
"## Fine-Tuning t5-small\n", | ||
"Our goals in this notebook are simple. We want to fine-tune the t5-small model and verify that its behavior has changed as a result of our fine-tuning.\n", | ||
"\n", | ||
"The [t5 (text-to-text transfer transformer) family of models](https://blog.research.google/2020/02/exploring-transfer-learning-with-t5.html) was developed by Google Research. It was presented as an advancement over BERT-style models which could output only a class label or a span of the input. t5 allows the same model, loss, and hyperparameters to be used for *any* nlp task. t5 differs from GPT models because it is an encoder-decoder model, while GPT models are decoder-only models.\n", | ||
"\n", | ||
"t5-small is a 60 million parameter model. A an oft-cited heuristic for model training is that you need GPU memory (VRAM) in Gigabytes greater than or equal to the number of parameters in billions times 16. So a 1 Billion parameter model would require approximately 16GB of VRAM for training. t5-small is a 0.06B parameter model and thus requires only around 0.96GB of VRAM for training. Again--we're starting *very small*.\n", | ||
"\n", | ||
"## A few things to keep in mind\n", | ||
"Check out the [Readme](README.md) if you haven't already, as it provides important context for this whole project. If you're looking for a set of absolute best practices for how to train particular models, this isn't the place to find them (though I will link them when I come across them, and will try to make improvements where I can, as long as they don't come at the cost of extra complexity!). The goal is to develop a high-level understanding and intuition on model training and fine-tuning, so you can fairly quickly get to something that *works* and then iterate to make it work *better*.\n", | ||
"\n", | ||
"## Compute used in this example\n", | ||
"I am using a g4dn.4xlarge AWS ec2 instance, which has a single T4 GPU with 16GB VRAM.\n", | ||
"\n", | ||
"# 1. Get the model and try some examples\n", | ||
"Before training the model, it helps to have some sense of its base behavior. Let's take a look. See appendix C of the [t5 paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for examples of how to format inputs for various tasks." | ||
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"application/vnd.databricks.v1+cell": {
"cellMetadata": {
"byteLimit": 2048000,
"rowLimit": 10000
},
"inputWidgets": {},
"nuid": "c54649ba-e0cb-4f64-a10b-dd509fc16976",
"showTitle": false,
"title": ""
}
},
"outputs": [],
"source": [
"%pip install --upgrade transformers torch\n",
"dbutils.library.restartPython()" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 0, | ||
"metadata": { | ||
"application/vnd.databricks.v1+cell": { | ||
"cellMetadata": { | ||
"byteLimit": 2048000, | ||
"rowLimit": 10000 | ||
}, | ||
"inputWidgets": {}, | ||
"nuid": "37ae82d9-587e-4f75-9b30-41f253e0fbbe", | ||
"showTitle": false, | ||
"title": "" | ||
} | ||
}, | ||
"outputs": [], | ||
"source": [ | ||
"from transformers import AutoModelForSeq2SeqLM, AutoTokenizer\n", | ||
"import torch\n", | ||
"\n", | ||
"# Load model and tokenizer\n", | ||
"tokenizer = AutoTokenizer.from_pretrained(\"t5-small\")\n", | ||
"model = AutoModelForSeq2SeqLM.from_pretrained(\"t5-small\")\n", | ||
"\n", | ||
"# Check if GPU is available and move the model to GPU\n", | ||
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", | ||
"model = model.to(device)\n", | ||
"\n", | ||
"# Sample text\n", | ||
"# input_text = \"Translate English to German: The house is wonderful.\"\n", | ||
"input_text = \"question: What is the deepspeed license? context: DeepSpeed is an open source deep learning optimization library for PyTorch. The library is designed to reduce computing power and memory use and to train large distributed models with better parallelism on existing computer hardware. DeepSpeed is optimized for low latency, high throughput training. It includes the Zero Redundancy Optimizer (ZeRO) for training models with 1 trillion or more parameters. Features include mixed precision training, single-GPU, multi-GPU, and multi-node training as well as custom model parallelism. The DeepSpeed source code is licensed under MIT License and available on GitHub.\"\n", | ||
"\n", | ||
"# Encode and generate response\n", | ||
"input_ids = tokenizer.encode(input_text, return_tensors=\"pt\").to(device)\n", | ||
"output_ids = model.generate(input_ids, max_new_tokens=20)[0]\n", | ||
"\n", | ||
"# Decode and print the output text\n", | ||
"output_text = tokenizer.decode(output_ids, skip_special_tokens=True)\n", | ||
"print(output_text)" | ||
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"application/vnd.databricks.v1+cell": {
"cellMetadata": {
"byteLimit": 2048000,
"rowLimit": 10000
},
"inputWidgets": {},
"nuid": "c46abb2d-7ea5-49b8-9277-ece228ecb165",
"showTitle": false,
"title": ""
}
},
"outputs": [],
"source": []
}
],
"metadata": {
"application/vnd.databricks.v1+notebook": {
"dashboards": [],
"language": "python",
"notebookMetadata": {
"pythonIndentUnit": 4
},
"notebookName": "1. T5-Small on Single GPU",
"widgets": {}
}
},
"nbformat": 4,
"nbformat_minor": 0
}