
Commit

update README
Signed-off-by: sallyom <[email protected]>
Co-authored-by: MichaelClifford <[email protected]>
sallyom and MichaelClifford committed Mar 28, 2024
1 parent 4a46194 commit 84d37ae
Showing 13 changed files with 28 additions and 379 deletions.
52 changes: 28 additions & 24 deletions README.md
@@ -1,47 +1,51 @@
-# Locallm
+# AI Lab Recipes

 This repo contains recipes for building and running containerized AI and LLM Applications locally with podman.

-These containerized AI recipes can be used to help developers quickly prototype new AI and LLM based applications, without the need for relying on any other externally hosted services. Since they are already containerized, it also helps developers move quickly from prototype to production.
+These containerized AI recipes can be used to help developers quickly prototype new AI and LLM based applications, without the need for relying
+on any other externally hosted services. Since they are already containerized, it also helps developers move quickly from prototype to production.

-## Current Recipes:
+## Model services

+[model servers examples](./model_servers)

-* [Model Service](#model-service)
-* [Chatbot](#chatbot)
-* [Text Summarization](#text-summarization)
-* [Code Generation](#code-generation)
-* [RAG](#rag-application) (Retrieval Augmented Generation)
-* [Fine-tuning](#fine-tuning)
+#### What's a model server?

-### Model service
+A model server is a program that serves machine-learning models and makes their functions available via API so that
+applications can incorporate AI. This repository provides descriptions and files for building several model servers.

-A model service that can be used for various applications with various models is included in this repository.
-Learn how to build and run the model service here: [Llamacpp_python model service](/model_servers/llamacpp_python/README.md).
+Many of the sample applications rely on the `llamacpp_python` model server by default. This server can be used for various applications with various models.
+However, each sample application can be paired with a variety of model servers.

-### Chatbot
+Learn how to build and run the llamacpp_python model by following the [llamacpp_python model server README.](/model_servers/llamacpp_python/README.md).

-A simple chatbot using the [Streamlit UI](https://docs.streamlit.io/). Learn how to build and run this application here: [Chatbot](/chatbot-langchain/).
+## Current Recipes:

-### Text Summarization
+There are several sample applications in this repository. They live in the [recipes](./recipes) folder.
+They fall under the categories:

-An LLM app that can summarize arbitrarily long text inputs with the [Streamlit UI](https://docs.streamlit.io/). Learn how to build and run this application here:
-[Text Summarization](/summarizer-langchain/).
+* [audio](./recipes/audio)
+* [computer-vision](./recipes/computer_vision)
+* [multimodal](./recipes/multimodal)
+* [natural language processing](./recipes/natural_language_processing)

-### Code generation

-A simple chatbot using the [Streamlit UI](https://docs.streamlit.io/). Learn how to build and run this application here: [Code Generation](/code-generation/).
+Most of the sample applications follow a similar pattern that includes a model-server and an inference application.
+Many sample applications utilize the [Streamlit UI](https://docs.streamlit.io/).

-### RAG
+Learn how to build and run each application by visiting each of the categories above. For example
+the [chatbot recipe](./recipes/natural_language_processing/chatbot).

-A chatbot using the [Streamlit UI](https://docs.streamlit.io/) and Retrieval Augmented Generation. Learn how to build and run this application here: [RAG](/rag-langchain/).

-### Fine Tuning
+## Fine Tuning

 This application allows a user to select a model and a data set they'd like to fine-tune that model on.
 Once the application finishes, it outputs a new fine-tuned model for the user to apply to other LLM services.
-Learn how to build and run this model training job here: [Fine-tuning](/finetune/).

+Learn how to build and run this model training job here: [Fine tuning example](/finetune/).

 ## Current Locallm Images built from this repository

-Images for all sample applications and models are tracked in [locallm-images.md](./locallm-images.md)
+Images for many sample applications and models are available in `quay.io`. All currently built images are tracked in
+[ai-lab-recipes-images.md](./ai-lab-recipes-images.md)
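For context on how the pieces described in the new README fit together, here is a minimal sketch of building and running the `llamacpp_python` model server with podman and calling it from an application. The image tag, build path, model file name, port, and the OpenAI-compatible endpoint are illustrative assumptions, not details taken from this commit.

```bash
# Build the model server image from a checkout of the repo
# (build context path and image tag are assumptions)
podman build -t llamacpp-python-server ./model_servers/llamacpp_python

# Run the server with a locally downloaded GGUF model mounted in
# (mount target, MODEL_PATH value, and port 8001 are assumptions)
podman run -d --rm --name model-server \
  -p 8001:8001 \
  -v "$(pwd)/models:/models:Z" \
  -e MODEL_PATH=/models/llama-2-7b-chat.Q5_K_M.gguf \
  llamacpp-python-server

# A recipe application can then point at the server's API,
# assuming it exposes an OpenAI-compatible chat endpoint
curl -s http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```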

File renamed without changes.
28 changes: 0 additions & 28 deletions embed-workloads/Containerfile-codegen (file deleted)
36 changes: 0 additions & 36 deletions embed-workloads/Containerfile-nvidia (file deleted)
99 changes: 0 additions & 99 deletions embed-workloads/README.md (file deleted)
28 changes: 0 additions & 28 deletions embed-workloads/quadlets/ai-codegenerator/README.md (file deleted)
7 changes: 0 additions & 7 deletions embed-workloads/quadlets/ai-codegenerator/codegen.image (file deleted)
16 changes: 0 additions & 16 deletions embed-workloads/quadlets/ai-codegenerator/codegen.kube.example (file deleted)
45 changes: 0 additions & 45 deletions embed-workloads/quadlets/ai-codegenerator/codegen.yaml (file deleted)
28 changes: 0 additions & 28 deletions embed-workloads/quadlets/ai-summarizer/README.md (file deleted)
7 changes: 0 additions & 7 deletions embed-workloads/quadlets/ai-summarizer/summarizer.image (file deleted)
16 changes: 0 additions & 16 deletions embed-workloads/quadlets/ai-summarizer/summarizer.kube.example (file deleted)
(Remaining changed files did not load.)

0 comments on commit 84d37ae
