Update README.md
williamFalcon authored Apr 16, 2024
1 parent 8685b55 commit e1f5540
Showing 1 changed file with 3 additions and 5 deletions.
README.md (8 changes: 3 additions & 5 deletions)
````diff
@@ -40,7 +40,7 @@ We reimplemented all model architectures and training recipes from scratch for 4
 1. Remove all abstraction layers and have single file implementations.
 2. Guarantee Apache 2.0 compliance to enable enterprise use without limits.
 3. Optimized each model architectural detail to maximize performance, reduce costs, and speed up training.
-4. Highly-optimized [recipe configs](https://github.com/Lightning-AI/litgpt/tree/main/config_hub) we have tested at enterprise scale.
+4. Highly-optimized [recipe configs](#training-recipes) we have tested at enterprise scale.
 
 &nbsp;
 
@@ -231,9 +231,9 @@ Use, Finetune, pretrain, deploy over 20+ LLMs ([full list](tutorials/download_mo
 
 # Training recipes
 
-LitGPT comes with validated recipes (YAML configs) to train models under different conditions.
+LitGPT comes with validated recipes (YAML configs) to train models under different conditions. We've generated these recipes based on the parameters we found to perform the best for different training conditions.
 
-We've generated these recipes based on the parameters we found to perform the best for different training conditions.
+Browse all training recipes [here](config_hub).
 
 ### Example
 
@@ -242,8 +242,6 @@ litgpt finetune lora \
 --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml
 ```
 
-Browse all training recipes [here](config_hub).
-
 ### What is a config
 Configs let you customize training for all granular parameters like:
 
````
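For context on the "What is a config" section touched by this diff: the recipe configs are plain YAML files passed to the CLI via `--config`, as in the example command above. Below is a minimal sketch of what a LoRA finetuning recipe might contain; the field names and values are illustrative assumptions, not copied from `config_hub`, so consult the repository's validated recipes for the real ones.

```yaml
# Hypothetical LoRA finetuning recipe (illustrative field names and values only;
# the validated recipes live under config_hub in the LitGPT repository).
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf  # base model to finetune
out_dir: out/finetune/lora-llama-2-7b                 # where checkpoints are written
precision: bf16-true                                  # numeric precision for training

# LoRA-specific hyperparameters
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05

# Training-loop parameters
train:
  global_batch_size: 8
  micro_batch_size: 2
  epochs: 4
  learning_rate: 0.0002
```

A file like this pins down every granular parameter of a run in one place, which is what lets the recipes be validated once and then reused as-is.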
