From 7d2b726dde1a1e7688f9026467cd22d33628ea86 Mon Sep 17 00:00:00 2001
From: Sebastian Raschka
Date: Sun, 14 Apr 2024 08:17:30 -0500
Subject: [PATCH] Update README.md

---
 README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/README.md b/README.md
index 1a184f6687..5cc99a94d4 100644
--- a/README.md
+++ b/README.md
@@ -105,6 +105,8 @@ litgpt chat \
   --checkpoint_dir out/phi-2-lora/final
 ```
 
+&nbsp;
+
 ### Pretrain an LLM
 
 Train an LLM from scratch on your own data via pretraining:
@@ -132,6 +134,8 @@ litgpt chat \
   --checkpoint_dir out/custom-model/final
 ```
 
+&nbsp;
+
 ### Continue pretraining an LLM
 
 This is another way of finetuning that specializes an already pretrained model by training on custom data:
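
For context on the hunks above: the `### Pretrain an LLM` section that this patch pads with `&nbsp;` spacers is driven by LitGPT's `pretrain` subcommand, which produces the `out/custom-model/final` checkpoint that the second hunk's `litgpt chat` context lines reference. A minimal sketch of that step is below; the model name, tokenizer directory, data arguments, and token budget are illustrative assumptions, not part of this patch:

```bash
# Sketch of the pretraining step the hunk context alludes to (assumed flags/values,
# matching the flag-style CLI used by the `litgpt chat` commands in the diff).
# Assumes a tokenizer was already fetched, e.g. via `litgpt download`.
litgpt pretrain \
  --model_name pythia-160m \
  --tokenizer_dir checkpoints/EleutherAI/pythia-160m \
  --data TextFiles \
  --data.train_data_path custom_texts/ \
  --train.max_tokens 10000000 \
  --out_dir out/custom-model

# Chat with the resulting checkpoint, as shown in the patch's second hunk
litgpt chat \
  --checkpoint_dir out/custom-model/final
```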