diff --git a/content/patterns/rag-llm-gitops/_index.md b/content/patterns/rag-llm-gitops/_index.md
index ec3084bda..48b425a39 100644
--- a/content/patterns/rag-llm-gitops/_index.md
+++ b/content/patterns/rag-llm-gitops/_index.md
@@ -23,7 +23,14 @@ ci: ai
 
 ## Introduction
 
-This deployment is based on `validated pattern framework` that uses GitOps to easily provision all operators and apps. It deploys a Chatbot application that leverages the power of Large Language Models (LLMs) in conjunction with the Retrieval-Augmented Generation (RAG) framework running on Red Hat OpenShift to generate a project proposal for a given Red Hat product.
+This deployment is based on the `validated pattern framework`, using GitOps for
+seamless provisioning of all operators and applications. It deploys a Chatbot
+application that harnesses the power of Large Language Models (LLMs) combined
+with the Retrieval-Augmented Generation (RAG) framework.
+
+The application uses either the [EDB Postgres for Kubernetes operator](https://catalog.redhat.com/software/container-stacks/detail/5fb41c88abd2a6f7dbe1b37b)
+(default) or Redis to store embeddings of Red Hat products, running on Red Hat
+OpenShift to generate project proposals for specific Red Hat products.
 
 ## Pre-requisites
 
@@ -34,13 +41,15 @@ This deployment is based on `validated pattern framework` that uses GitOps to ea
 
 ## Demo Description & Architecture
 
-The goal of this demo is to demonstrate a Chatbot LLM application augmented with data from Red Hat product documentation running on Red Hat OpenShift. It deploys an LLM application that connects to multiple LLM providers such as OpenAI, Hugging Face, and NVIDIA NIM. The application generates a project proposal for a Red Hat product
+The goal of this demo is to demonstrate a Chatbot LLM application augmented with data from Red Hat product documentation
+running on Red Hat OpenShift.
+It deploys an LLM application that connects to multiple LLM providers such as OpenAI, Hugging Face, and NVIDIA NIM.
+The application generates a project proposal for a Red Hat product.
 
 ### Key Features
 
 - LLM Application augmented with content from Red Hat product documentation.
 - Multiple LLM providers (OpenAI, Hugging Face, NVIDIA)
-- Vector Database, such as PGVECTOR or REDIS, to store embeddings of RedHat product documentation.
+- Vector Database, such as EDB Postgres for Kubernetes or Redis, to store embeddings of Red Hat product documentation.
 - Monitoring dashboard to provide key metrics such as ratings
 - GitOps setup to deploy e2e demo (frontend / vector database / served models)

diff --git a/content/patterns/rag-llm-gitops/getting-started.md b/content/patterns/rag-llm-gitops/getting-started.md
index fa7a6beb3..cd66a9f7e 100644
--- a/content/patterns/rag-llm-gitops/getting-started.md
+++ b/content/patterns/rag-llm-gitops/getting-started.md
@@ -58,7 +58,7 @@ _Figure 6. Proposed demo architecture with OpenShift AI_
 
 ### Components deployed
 
 - **Hugging Face Text Generation Inference Server:** The pattern deploys a Hugging Face TGIS server. The server deploys `mistral-community/Mistral-7B-v0.2` model. The server will require a GPU node.
-- **EDB (PGVECTOR) / Redis Server:** A Vector Database server is deployed to store vector embeddings created from Red Hat product documentation.
+- **EDB Postgres for Kubernetes / Redis Server:** A Vector Database server is deployed to store vector embeddings created from Red Hat product documentation.
 - **Populate VectorDb Job:** The job creates the embeddings and populates the vector database.
 - **LLM Application:** This is a Chatbot application that can generate a project proposal by augmenting the LLM with the Red Hat product documentation stored in vector db.
 - **Prometheus:** Deploys a prometheus instance to store the various metrics from the LLM application and TGIS server.
@@ -99,7 +99,7 @@ Alternatiely, follow the [instructions](../gpu_provisioning) to manually install
 
 ### Deploy application
 
-***Note:**: This pattern supports two types of vector databases, PGVECTOR and REDIS. By default the pattern will deploy PGVECTOR as a vector DB. To deploy REDIS, change the global.db.type to REDIS in [values-global.yaml](./values-global.yaml).
+**Note:** This pattern supports two types of vector databases, EDB Postgres for Kubernetes and Redis. By default the pattern deploys EDB Postgres for Kubernetes as the vector DB. To deploy Redis, change `global.db.type` to `REDIS` in [values-global.yaml](./values-global.yaml).
 
 ```yaml
 ---
 global:
   useCSV: false
   syncPolicy: Automatic
   installPlanApproval: Automatic
-# Possible value for db.type = [REDIS, PGVECTOR]
+# Possible values for db.type = [REDIS, EDB]
   db:
     index: docs
-    type: PGVECTOR <--- Default is PGVECTOR, Change the db type to REDIS for REDIS deployment
+    type: EDB <--- Default is EDB; change the db type to REDIS for Redis deployment
 main:
   clusterGroupName: hub
   multiSourceConfig: