Explain dataset options #1407

Merged · 2 commits · May 10, 2024
tutorials/prepare_dataset.md: 80 additions, 0 deletions
@@ -30,6 +30,8 @@ For the following examples, we will focus on finetuning with the `litgpt/finetun
However, the same steps apply to all other models and finetuning scripts.
Please read the [tutorials/finetune_*.md](.) documents for more information about finetuning models.

 

> [!IMPORTANT]
> By default, the maximum sequence length is obtained from the model configuration file. If you run into out-of-memory errors, especially with LIMA and Dolly,
> you can try lowering the context length by setting the `--train.max_seq_length` parameter, for example, `litgpt finetune lora --train.max_seq_length 256`. For more information on truncating datasets, see the *Truncating datasets* subsection in the Alpaca section below.
@@ -50,6 +52,13 @@ litgpt finetune lora \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

 

> [!TIP]
> Use `litgpt finetune --data.help Alpaca` to list additional dataset-specific command line options.

 

#### Truncating datasets

By default, the finetuning scripts use the length of the longest tokenized sample in the dataset as the block size. However, if you are willing to truncate a few examples in the training set, you can reduce the computational resource requirements significantly by setting a sequence length threshold via `--train.max_seq_length`. An appropriate maximum sequence length can be chosen by considering the distribution of the data sample lengths shown in the histogram below, as in the example command that follows.
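
For instance, the following sketch caps the sequence length at 256 tokens for the Alpaca dataset; the value 256 is only an illustrative threshold and should be adjusted to the length distribution of your data:

```bash
litgpt finetune lora \
--data Alpaca \
--train.max_seq_length 256 \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```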
@@ -73,8 +82,24 @@ For comparison, the Falcon 7B model requires 23.52 GB of memory for the original

[Alpaca-2k](https://huggingface.co/datasets/mhenrichsen/alpaca_2k_test) is a smaller, 2000-sample subset of Alpaca described above.

```bash
litgpt finetune lora \
--data Alpaca2k \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

 

> [!TIP]
> Use `litgpt finetune --data.help Alpaca2k` to list additional dataset-specific command line options.

 

The Alpaca-2k dataset distribution is shown below.

<img src="images/prepare_dataset/alpaca-2k.jpg" width=400px>


### Alpaca-GPT4

The Alpaca-GPT4 dataset was built by using the prompts of the original Alpaca dataset and generating the responses with GPT-4. The
@@ -88,6 +113,13 @@ litgpt finetune lora \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

&nbsp;

> [!TIP]
> Use `litgpt finetune --data.help AlpacaGPT4` to list additional dataset-specific command line options.

&nbsp;

The Alpaca-GPT4 dataset distribution is shown below.

<img src="images/prepare_dataset/alpacagpt4.jpg" width=400px>
@@ -108,6 +140,13 @@ litgpt finetune lora \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

&nbsp;

> [!TIP]
> Use `litgpt finetune --data.help Alpaca` to list additional dataset-specific command line options.

&nbsp;

The Alpaca Libre dataset distribution is shown below.

<img src="images/prepare_dataset/alpaca_libre.jpg" width=400px>
@@ -136,6 +175,14 @@ litgpt finetune lora \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

&nbsp;


> [!TIP]
> Use `litgpt finetune --data.help Deita` to list additional dataset-specific command line options.

&nbsp;

Deita contains multiturn conversations. By default, only the first instruction-response pair from
each of these multiturn conversations is included. If you want to override this behavior and include the follow-up instructions
and responses, set `--data.include_multiturn_conversations True`, which will include all multiturn conversations as regular
@@ -172,6 +219,13 @@ litgpt finetune lora \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

&nbsp;

> [!TIP]
> Use `litgpt finetune --data.help Dolly` to list additional dataset-specific command line options.

&nbsp;

The Dolly dataset distribution is shown below.

<img src="images/prepare_dataset/dolly.jpg" width=400px>
@@ -228,6 +282,13 @@ litgpt finetune lora \

&nbsp;

> [!TIP]
> Use `litgpt finetune --data.help LongForm` to list additional dataset-specific command line options.

&nbsp;

&nbsp;

### LIMA

The LIMA dataset is a collection of 1,000 carefully curated prompts and responses, as described in the [LIMA: Less Is More for Alignment](https://arxiv.org/abs/2305.11206) paper. The dataset is sourced from three community Q&A websites: Stack Exchange, wikiHow, and the Pushshift Reddit Dataset. In addition, it also contains prompts and answers written and collected by the authors of the LIMA paper.
@@ -242,6 +303,13 @@ litgpt finetune lora \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

&nbsp;

> [!TIP]
> Use `litgpt finetune --data.help LIMA` to list additional dataset-specific command line options.

&nbsp;

LIMA contains a handful of multiturn conversations. By default, only the first instruction-response pair from
each of these multiturn conversations is included. If you want to override this behavior and include the follow-up instructions
and responses, set `--data.include_multiturn_conversations True`, as in the example below.
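
A minimal sketch of such a run, reusing the Falcon 7B checkpoint from the examples above:

```bash
litgpt finetune lora \
--data LIMA \
--data.include_multiturn_conversations True \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

Note that access to the LIMA data on Hugging Face is gated, so you may additionally need to supply an access token; `litgpt finetune --data.help LIMA` lists the relevant option.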
@@ -283,6 +351,13 @@ litgpt finetune lora \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

&nbsp;

> [!TIP]
> Use `litgpt finetune --data.help FLAN` to list additional dataset-specific command line options.

&nbsp;

You can find a list of all 66 supported subsets [here](https://huggingface.co/datasets/Muennighoff/flan).
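
If you only want to train on particular subsets, check the dataset-specific options via `litgpt finetune --data.help FLAN`. As a rough sketch, assuming a `--data.subsets` option that accepts a comma-separated list of subset names (an assumption to verify against the `--data.help` output, with placeholder names), a run might look like:

```bash
litgpt finetune lora \
--data FLAN \
--data.subsets "<subset_name_1>,<subset_name_2>" \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```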

&nbsp;
@@ -365,6 +440,11 @@ You can also pass a directory containing a `train.json` and `val.json` to `--dat

&nbsp;

> [!TIP]
> Use `litgpt finetune --data.help JSON` to list additional dataset-specific command line options.

&nbsp;
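
As a sketch, assuming the JSON data module accepts the path to your file via a `--data.json_path` option (an assumption; confirm the exact option names with `litgpt finetune --data.help JSON`), a run on a custom dataset might look like:

```bash
litgpt finetune lora \
--data JSON \
--data.json_path path/to/your/data.json \
--checkpoint_dir "checkpoints/tiiuae/falcon-7b"
```

&nbsp;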

### Preparing Custom Datasets Using DataModule

If you don't have a JSON file following the format described in the previous section, the easiest way to prepare a new dataset is to copy and modify one of the existing data modules in LitGPT: