Zero to LitGPT #1165

Merged · 17 commits · Mar 28, 2024
19 changes: 15 additions & 4 deletions README.md
@@ -35,13 +35,12 @@

✅  [Quantization](tutorials/quantize.md): 4-bit floats, 8-bit integers, and double quantization.

✅  [Exporting](https://github.com/Lightning-AI/litgpt/blob/wip/tutorials/convert_lit_models.md) to other popular model weight formats.
✅  [Exporting](tutorials/convert_lit_models.md) to other popular model weight formats.

✅  Many popular datasets for [pretraining](tutorials/pretrain_tinyllama.md) and [finetuning](tutorials/prepare_dataset.md), and [support for custom datasets](tutorials/prepare_dataset.md#preparing-custom-datasets-for-instruction-finetuning).

✅  Readable and easy-to-modify code to experiment with the latest research ideas.


 
<br>
&nbsp;
@@ -59,8 +58,6 @@ The following [Lightning Studio](https://lightning.ai/lightning-ai/studios) temp





&nbsp;
<br>
&nbsp;
@@ -107,6 +104,14 @@ For more information, refer to the [download](tutorials/download_model_weights.m


&nbsp;

> [!NOTE]
> If you are new to LitGPT, we recommend starting with the **[Zero to Pretraining, Finetuning, and Using LLMs with LitGPT](tutorials/0_to_litgpt.md)** tutorial.



&nbsp;

## Finetuning and pretraining

LitGPT supports [pretraining](tutorials/pretrain_tinyllama.md) and [finetuning](tutorials/finetune.md) to optimize models on existing or custom datasets. Below is an example showing how to finetune a model with LoRA:
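For orientation, a LoRA finetuning run can be sketched as the following commands. This is a hedged sketch, not the README's own example: the `litgpt download` and `litgpt finetune lora` subcommands reflect the CLI of this era, but the specific model name, dataset, and flags below are illustrative assumptions; consult [tutorials/finetune.md](tutorials/finetune.md) for the authoritative invocation.

```shell
# Fetch base model weights into checkpoints/ (model name is illustrative)
litgpt download --repo_id microsoft/phi-2

# Finetune with LoRA; dataset and step count are illustrative assumptions,
# see tutorials/finetune.md for the exact supported options
litgpt finetune lora \
  --checkpoint_dir checkpoints/microsoft/phi-2 \
  --data Alpaca2k \
  --train.max_steps 1000
```

The finetuned LoRA weights are written under an output directory and can then be used with the generation and chat commands described later in the README.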
@@ -324,6 +329,12 @@ If you have general questions about building with LitGPT, please [join our Disco

## Tutorials, how-to guides, and docs


> [!NOTE]
> If you are new to LitGPT, we recommend starting with the **[Zero to Pretraining, Finetuning, and Using LLMs with LitGPT](tutorials/0_to_litgpt.md)** tutorial.

Tutorials and in-depth feature documentation can be found below:

- Finetuning, incl. LoRA, QLoRA, and Adapters ([tutorials/finetune.md](tutorials/finetune.md))
- Pretraining ([tutorials/pretrain_tinyllama.md](tutorials/pretrain_tinyllama.md))
- Model evaluation ([tutorials/evaluation.md](tutorials/evaluation.md))