From 01e9a59e8982d83a1c8850d9905c451bb80a1d78 Mon Sep 17 00:00:00 2001
From: Michael Clifford
Date: Mon, 15 Apr 2024 12:16:22 -0400
Subject: [PATCH] update README

Signed-off-by: Michael Clifford
---
 README.md | 38 ++++++++++++++++++--------------------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/README.md b/README.md
index 02313b36..4f7a1d1d 100644
--- a/README.md
+++ b/README.md
@@ -1,26 +1,24 @@
 # AI Lab Recipes
 
 This repo contains recipes for building and running containerized AI and LLM
-Applications locally with Podman.
+applications with Podman.
 
 These containerized AI recipes can be used to help developers quickly prototype
-new AI and LLM based applications, without the need for relying on any other
+new AI and LLM-based applications locally, without relying on any other
 externally hosted services. Since they are already containerized, it also helps
 developers move quickly from prototype to production.
 
-## Model services
-
-[model servers examples](./model_servers)
+## Model servers
 
 #### What's a model server?
 
-A model server is a program that serves machine-learning models or LLMs and
-makes their functions available via API so that applications can incorporate
-AI. This repository provides descriptions and files for building several model
-servers.
+A model server is a program that serves machine-learning models, such as LLMs, and
+makes their functions available via an API. This makes it easy for developers to
+incorporate AI into their applications. This repository provides descriptions and
+code for building several of these model servers.
 
 Many of the sample applications rely on the `llamacpp_python` model server by
-default. This server can be used for various applications with various models.
+default. This server can be used for various generative AI applications with different models.
 However, each sample application can be paired with a variety of model servers.
 
 Learn how to build and run the llamacpp_python model server by following the
@@ -28,8 +26,13 @@ Learn how to build and run the llamacpp_python model server by following the
 
 ## Current Recipes
 
-There are several sample applications in this repository. They live in the
-[recipes](./recipes) folder.
+Recipes consist of at least two components: a model server and an AI application.
+The model server manages the model, and the AI application provides the logic
+needed to perform a specific task such as chat, summarization, or object
+detection.
+
+There are several sample applications in this repository that can be found in the
+[recipes](./recipes) directory.
 
 They fall under the categories:
 
@@ -39,15 +42,10 @@ They fall under the categories:
 * [natural language processing](./recipes/natural_language_processing)
 
-Most of the sample applications follow a similar pattern that includes a
-model-server and an inference application. Many sample applications utilize the
-[Streamlit UI](https://docs.streamlit.io/).
-
-Learn how to build and run each application by visiting each of the categories
-above. For example
-the [chatbot recipe](./recipes/natural_language_processing/chatbot).
+Learn how to build and run each application by visiting its README.
+For example, see the [chatbot recipe](./recipes/natural_language_processing/chatbot).
 
-## Current Locallm Images built from this repository
+## Current AI Lab Recipe images built from this repository
 
 Images for many sample applications and models are available in `quay.io`. All
 currently built images are tracked in
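To make the model-server description in the patched README concrete, below is a minimal sketch of how an AI application could call such a server. It assumes the `llamacpp_python` model server is already running locally and exposes an OpenAI-compatible chat completions endpoint, as llama-cpp-python's built-in server does; the host, port `8001`, helper name, and prompt are illustrative assumptions, not values taken from the patch.

```python
# Minimal sketch: an application querying a locally running model server.
# Assumptions (illustrative, not from the patch): the llamacpp_python model
# server is up at localhost:8001 and serves an OpenAI-compatible
# /v1/chat/completions endpoint.
import requests


def ask_model_server(prompt: str, base_url: str = "http://localhost:8001") -> str:
    """Send a single chat message to the model server and return its reply."""
    response = requests.post(
        f"{base_url}/v1/chat/completions",
        json={
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 128,  # keep the demo reply short
        },
        timeout=60,
    )
    response.raise_for_status()
    # OpenAI-compatible servers return the text at choices[0].message.content.
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_model_server("In one sentence, what is a model server?"))
```

Because the call targets a generic OpenAI-compatible endpoint rather than anything model-specific, the same sketch should work against any other model server that speaks the same protocol, which is what allows a recipe to pair one AI application with different model servers.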