From 71ed4dedca869e148bcfd35b8fc71e096aed82c9 Mon Sep 17 00:00:00 2001
From: William Falcon
Date: Fri, 19 Apr 2024 06:50:32 -0400
Subject: [PATCH] Update README.md

---
 README.md | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index 303b19baa6..c3634dcf87 100644
--- a/README.md
+++ b/README.md
@@ -18,11 +18,11 @@ Uses the latest state-of-the-art techniques:

 Lightning AI •
 Models •
-Install •
-Get started •
-Evaluate •
+Quick start •
+Inference •
 Finetune •
 Pretrain •
+Deploy •
 Features •
 Training recipes (YAML)

@@ -32,7 +32,7 @@ Uses the latest state-of-the-art techniques:
 
 # Finetune, pretrain and deploy LLMs Lightning fast ⚡⚡
-LitGPT is a command-line tool designed to easily [finetune](#finetune-an-llm), [pretrain](#pretrain-an-llm), [evaluate](#use-an-llm), and deploy [20+ LLMs](#choose-from-20-llms) **on your own data**. It features highly-optimized [training recipes](#training-recipes) for the world's most powerful open-source large language models (LLMs).
+LitGPT is a command-line tool designed to easily [finetune](#finetune-an-llm), [pretrain](#pretrain-an-llm), [evaluate](#use-an-llm), and [deploy](#deploy-an-llm) [20+ LLMs](#choose-from-20-llms) **on your own data**. It features highly-optimized [training recipes](#training-recipes) for the world's most powerful open-source large language models (LLMs).
 
 We reimplemented all model architectures and training recipes from scratch for 4 reasons:
 
@@ -112,7 +112,7 @@ pip install -e '.[all]'
 
 ---
 
-# Get started
+# Quick start
 After installing LitGPT, select the model and action you want to take on that model (finetune, pretrain, evaluate, deploy, etc...):
 
 ```bash
@@ -126,7 +126,8 @@ litgpt serve mistralai/Mistral-7B-Instruct-v0.2
 
-### Use an LLM
+### Use an LLM for inference
+Use LLMs for inference to test their chat capabilities, run evaluations, extract embeddings, and more.
 
 Here's an example showing how to use the Mistral 7B LLM.
 
@@ -251,8 +252,7 @@ litgpt chat \
 
 ### Deploy an LLM
-
-This example illustrates how to deploy an LLM using LitGPT.
+Once you're ready to deploy a finetuned LLM, run this command:
 
 Open In Studio
 
@@ -261,13 +261,15 @@ This example illustrates how to deploy an LLM using LitGPT.
 
 ```bash
-# 1) Download a pretrained model (alternatively, use your own finetuned model)
-litgpt download --repo_id microsoft/phi-2
+# Point the `serve` command at the checkpoint of your finetuned or pretrained model:
+litgpt serve --checkpoint_dir path/to/your/checkpoint/microsoft/phi-2
 
-# 2) Start the server
+# Alternative: if you haven't finetuned yet, download any checkpoint to deploy it:
+litgpt download --repo_id microsoft/phi-2
 litgpt serve --checkpoint_dir checkpoints/microsoft/phi-2
 ```
 
+Test the server in a separate terminal and integrate the model API into your AI product:
 ```python
 # 3) Use the server (in a separate session)
 import requests, json
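As a reviewer's note on the client example the hunk above is introducing: the truncated `requests` snippet could be sketched as a small standalone helper. The `/predict` route, port `8000`, and the `{"prompt": ...}` / `{"output": ...}` JSON field names are assumptions for illustration and should be checked against the server's startup output:

```python
import json
import urllib.request


def query_server(prompt, url="http://127.0.0.1:8000/predict",
                 opener=urllib.request.urlopen):
    """Send a prompt to a locally running `litgpt serve` instance.

    The /predict route, port, and JSON schema are assumptions for
    illustration; verify them against the server's startup log.
    """
    req = urllib.request.Request(
        url,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # `opener` is injectable so the helper can be exercised without a live server
    with opener(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["output"]
```

With the server running, `print(query_server("What do Llamas eat?"))` would then print the model's reply.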