Zero to LitGPT (#1165)
Co-authored-by: Carlos Mocholí <[email protected]>
rasbt and carmocca committed Apr 3, 2024
1 parent ccd9181 commit b9daa69
Showing 8 changed files with 564 additions and 4 deletions.
19 changes: 15 additions & 4 deletions README.md
@@ -35,13 +35,12 @@

&nbsp;[Quantization](tutorials/quantize.md): 4-bit floats, 8-bit integers, and double quantization.

- &nbsp;[Exporting](https://github.com/Lightning-AI/litgpt/blob/wip/tutorials/convert_lit_models.md) to other popular model weight formats.
+ &nbsp;[Exporting](tutorials/convert_lit_models.md) to other popular model weight formats.

&nbsp;Many popular datasets for [pretraining](tutorials/pretrain_tinyllama.md) and [finetuning](tutorials/prepare_dataset.md), and [support for custom datasets](tutorials/prepare_dataset.md#preparing-custom-datasets-for-instruction-finetuning).

&nbsp;Readable and easy-to-modify code to experiment with the latest research ideas.


&nbsp;
<br>
&nbsp;
@@ -59,8 +58,6 @@ The following [Lightning Studio](https://lightning.ai/lightning-ai/studios) temp
&nbsp;
<br>
&nbsp;
@@ -107,6 +104,14 @@ For more information, refer to the [download](tutorials/download_model_weights.m


&nbsp;

> [!NOTE]
> We recommend starting with the **[Zero to Pretraining, Finetuning, and Using LLMs with LitGPT](tutorials/0_to_litgpt.md)** tutorial if you are new to LitGPT.


&nbsp;

## Finetuning and pretraining

LitGPT supports [pretraining](tutorials/pretrain_tinyllama.md) and [finetuning](tutorials/finetune.md) to optimize models on existing or custom datasets. Below is an example showing how to finetune a model with LoRA:
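The example itself is collapsed in this diff view; the commands below are a minimal sketch of such a LoRA run, assuming the `litgpt download` and `litgpt finetune lora` subcommands and an illustrative model repo id (the exact flags and supported models are documented in tutorials/finetune.md):

```shell
# Download the base model weights (repo id is illustrative)
litgpt download --repo_id microsoft/phi-2

# Finetune with LoRA; --checkpoint_dir points at the downloaded weights
litgpt finetune lora --checkpoint_dir checkpoints/microsoft/phi-2
```

The checkpoint path mirrors the repo id by convention; a custom dataset would be selected with an additional data flag as described in tutorials/prepare_dataset.md.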
@@ -324,6 +329,12 @@ If you have general questions about building with LitGPT, please [join our Disco
## Tutorials, how-to guides, and docs
> [!NOTE]
> We recommend starting with the **[Zero to Pretraining, Finetuning, and Using LLMs with LitGPT](tutorials/0_to_litgpt.md)** tutorial if you are new to LitGPT.
Tutorials and in-depth feature documentation can be found below:
- Finetuning, incl. LoRA, QLoRA, and Adapters ([tutorials/finetune.md](tutorials/finetune.md))
- Pretraining ([tutorials/pretrain_tinyllama.md](tutorials/pretrain_tinyllama.md))
- Model evaluation ([tutorials/evaluation.md](tutorials/evaluation.md))
