Update dataset usage #1257

Merged 7 commits on Apr 5, 2024
README.md: 30 changes (18 additions, 12 deletions)
`pip install 'litgpt[all]'`

<details>
<summary>Advanced install options</summary>

Install from source:

```bash
pip install -e '.[all]'
```

</details>
# Get started
LitGPT is a command-line tool to use, pretrain, finetune and deploy LLMs.


&nbsp;

### Use an LLM
Here's an example showing how to use the Mistral 7B LLM.
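The full Mistral snippet is collapsed in this diff view, but it follows the same flow as the phi-2 example: download the weights, then start a chat session. A minimal sketch, assuming a Hugging Face repo id of `mistralai/Mistral-7B-Instruct-v0.2` and a `--checkpoint_dir` flag on `litgpt chat` (both are assumptions based on the other commands in this README, not taken from this PR):

```shell
# Hypothetical repo id; any supported Mistral 7B checkpoint follows the same pattern
litgpt download --repo_id mistralai/Mistral-7B-Instruct-v0.2

# Chat interactively with the downloaded checkpoint (flag name assumed)
litgpt chat --checkpoint_dir checkpoints/mistralai/Mistral-7B-Instruct-v0.2
```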

Train an LLM from scratch on your own data via [pretraining](tutorials/pretrain.md):
litgpt download --repo_id microsoft/phi-2

# 2) Finetune the model
litgpt pretrain \
--initial_checkpoint_dir checkpoints/microsoft/phi-2 \
--data Alpaca2k \
--out_dir out/phi-2

# 3) Chat with the model
litgpt chat \
| Model | Model size | Reference |
|---|---|---|
| StableLM by Stability AI | 3B, 7B | [Stability AI 2023](https://github.com/Stability-AI/StableLM) |
| StableLM Zephyr by Stability AI | 3B | [Stability AI 2023](https://stability.ai/blog/stablecode-llm-generative-ai-coding) |
| TinyLlama by Zhang et al. | 1.1B | [Zhang et al. 2023](https://github.com/jzhang38/TinyLlama) |
| Vicuna by LMSYS | 7B, 13B, 33B | [Li et al. 2023](https://lmsys.org/blog/2023-03-30-vicuna/) |

&nbsp;

## State-of-the-art features
✅ &nbsp;State-of-the-art optimizations: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, [optional CPU offloading](tutorials/oom.md#do-sharding-across-multiple-gpus), and [TPU and XLA support](extensions/xla).

✅ &nbsp;[Pretrain](tutorials/pretrain.md), [finetune](tutorials/finetune.md), and [deploy](tutorials/inference.md)

✅ &nbsp;Various precision settings: FP32, FP16, BF16, and FP16/FP32 mixed.

The following [Lightning Studio](https://lightning.ai/lightning-ai/studios) templates…
<br>
&nbsp;
# Use optimized configurations

LitGPT comes with out-of-the-box, highly performant settings via our YAML configs.
```bash
litgpt finetune lora \
--config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml
```

Override any parameter in the CLI:

```bash
litgpt finetune lora \
--config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml \
--lora_r 4
```
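Conceptually, values from the YAML config act as defaults and any explicitly passed CLI flag wins. A minimal stdlib sketch of that precedence (hypothetical parameter names and values; this is not LitGPT's actual implementation):

```python
import argparse

# Defaults as they might be loaded from a YAML config file (hypothetical values).
config = {"lora_r": 8, "learning_rate": 0.0002}

parser = argparse.ArgumentParser()
parser.add_argument("--lora_r", type=int)
parser.add_argument("--learning_rate", type=float)
args = parser.parse_args(["--lora_r", "4"])  # simulates passing `--lora_r 4` on the command line

# Only flags the user actually provided override the config defaults.
for key, value in vars(args).items():
    if value is not None:
        config[key] = value

print(config)  # {'lora_r': 4, 'learning_rate': 0.0002}
```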

Browse the available configuration files [here](config_hub).

&nbsp;

Tutorials and in-depth feature documentation can be found below:

## XLA

Lightning AI has partnered with Google to add first-class support for [Cloud TPUs](https://cloud.google.com/tpu) in [Lightning's frameworks](https://github.com/Lightning-AI/lightning) and LitGPT,
helping democratize AI for millions of developers and researchers worldwide.

Using TPUs with Lightning is as straightforward as changing one line of code.
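With Lightning Fabric, for example, that one line is the `accelerator` argument (a sketch assuming the current Fabric API; this code is not from this PR and requires TPU hardware to actually run):

```python
from lightning import Fabric

# A typical GPU run:
fabric = Fabric(accelerator="cuda", devices=8)

# The one-line change for TPUs:
fabric = Fabric(accelerator="tpu", devices=8)
```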
This implementation extends [Lit-LLaMA](https://github.com/lightning-AI/lit-llama)…

## Community showcase

Check out the projects below that use and build on LitGPT. If you have a project you'd like to add to this section, please don't hesitate to open a pull request.

&nbsp;
