From 5ff294860c521d41e9e4d15514d8b42ee3b10931 Mon Sep 17 00:00:00 2001 From: Maryam Tahhan Date: Fri, 25 Oct 2024 08:53:58 -0400 Subject: [PATCH] chore: fix trailing whitespace + EOL issues Signed-off-by: Maryam Tahhan --- .gitattributes | 2 +- .github/PULL_REQUEST_TEMPLATE.md | 6 +++--- .github/workflows/release.yaml | 1 - .npmrc | 1 - MIGRATION.md | 10 +++++----- PACKAGING-GUIDE.md | 11 ++++------- RELEASE.md | 5 +---- api/openapi.yaml | 6 +++--- docs/proposals/ai-studio.md | 14 +++++++------- docs/proposals/state-management.md | 9 ++++----- .../src/templates/python-langchain.mustache | 4 ++-- .../src/templates/quarkus-langchain4j.mustache | 2 -- 12 files changed, 30 insertions(+), 41 deletions(-) diff --git a/.gitattributes b/.gitattributes index 94f480de9..6313b56c5 100644 --- a/.gitattributes +++ b/.gitattributes @@ -1 +1 @@ -* text=auto eol=lf \ No newline at end of file +* text=auto eol=lf diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 6da2510f6..ab2807a06 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -2,14 +2,14 @@ ### Screenshot / video of UI - ### What issues does this PR fix or reference? - ### How to test this PR? - \ No newline at end of file + diff --git a/.github/workflows/release.yaml b/.github/workflows/release.yaml index 450b9fb39..c1c109691 100644 --- a/.github/workflows/release.yaml +++ b/.github/workflows/release.yaml @@ -161,4 +161,3 @@ jobs: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} with: id: ${{ needs.tag.outputs.releaseId}} - diff --git a/.npmrc b/.npmrc index 919b37d40..d67f37488 100644 --- a/.npmrc +++ b/.npmrc @@ -1,2 +1 @@ node-linker=hoisted - diff --git a/MIGRATION.md b/MIGRATION.md index 72739159b..34453fda6 100644 --- a/MIGRATION.md +++ b/MIGRATION.md @@ -5,7 +5,7 @@ Before **Podman AI Lab** `v1.2.0` the [user-catalog](./PACKAGING-GUIDE.md#applicationcatalog) was not versioned. Starting from `v1.2.0` the user-catalog require to have a `version` property. -> [!NOTE] +> [!NOTE] > The `user-catalog.json` file can be found in `~/.local/share/containers/podman-desktop/extensions-storage/redhat.ai-lab`. The list of catalog versions can be found in [packages/backend/src/utils/catalogUtils.ts](https://github.com/containers/podman-desktop-extension-ai-lab/blob/main/packages/backend/src/utils/catalogUtils.ts) @@ -14,7 +14,7 @@ The catalog has its own version number, as we may not require to update it with ## `None` to Catalog `1.0` -`None` represents any catalog version prior to the first versioning. +`None` represents any catalog version prior to the first versioning. Version `1.0` of the catalog adds an important property to models `backend`, defining the type of framework required by the model to run (E.g. LLamaCPP, WhisperCPP). @@ -22,20 +22,20 @@ Version `1.0` of the catalog adds an important property to models `backend`, def You can either delete any existing `user-catalog` by deleting the `~/.local/share/containers/podman-desktop/extensions-storage/redhat.ai-lab/user-catalog.json`. -> [!WARNING] +> [!WARNING] > This will remove the models you have imported from the catalog. You will be able to import it again afterward. If you want to keep the data, you can migrate it by updating certain properties within the recipes and models fields. ### Recipes -The recipe object has a new property `backend` which defines which framework is required. +The recipe object has a new property `backend` which defines which framework is required. 
Value accepted are `llama-cpp`, `whisper-cpp` and `none`. Moreover, the `models` property has been changed to `recommended`. > [!TIP] -> Before Podman AI Lab version v1.2 recipes uses the `models` property to list the models compatible. Now all models using the same `backend` could be used. We introduced `recommended` to highlight certain models. +> Before Podman AI Lab version v1.2 recipes uses the `models` property to list the models compatible. Now all models using the same `backend` could be used. We introduced `recommended` to highlight certain models. **Example** diff --git a/PACKAGING-GUIDE.md b/PACKAGING-GUIDE.md index fc2eb83b8..5c338f108 100644 --- a/PACKAGING-GUIDE.md +++ b/PACKAGING-GUIDE.md @@ -41,7 +41,7 @@ A model has the following attributes: - ```license```: the license under which the model is available - ```url```: the URL used to download the model - ```memory```: the memory footprint of the model in bytes, as computed by the workflow `.github/workflows/compute-model-sizes.yaml` -- ```sha256```: the SHA-256 checksum to be used to verify the downloaded model is identical to the original. It is optional and it must be HEX encoded +- ```sha256```: the SHA-256 checksum to be used to verify the downloaded model is identical to the original. It is optional and it must be HEX encoded #### Recipes @@ -65,7 +65,7 @@ The configuration file is called ```ai-lab.yaml``` and follows the following syn The root elements are called ```version``` and ```application```. -```version``` represents the version of the specifications that ai-lab adheres to (so far, the only accepted value here is `v1.0`). +```version``` represents the version of the specifications that ai-lab adheres to (so far, the only accepted value here is `v1.0`). ```application``` contains an attribute called ```containers``` whose syntax is an array of objects containing the following attributes: - ```name```: the name of the container @@ -102,15 +102,12 @@ application: - name: chatbot-model-servicecuda contextdir: model_services containerfile: cuda/Containerfile - model-service: true + model-service: true gpu-env: - cuda - arch: + arch: - amd64 ports: - 8501 image: quay.io/redhat-et/model_services:latest ``` - - - diff --git a/RELEASE.md b/RELEASE.md index e5795d887..ff4537ecf 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -14,7 +14,7 @@ Below is what a typical release week may look like: - **Monday (Notify):** 48-hour notification. Communicate to maintainers and public channels a release will be cut on Wednesday and to merge any pending PRs. Inform QE team. Start work on blog post as it is usually the longest part of the release process. - **Tuesday (Staging, Testing & Blog):** Stage the release (see instructions below) to create a new cut of the release to test. Test the pre-release (master branch) build briefly. Get feedback from committers (if applicable). Push the blog post for review (as it usually takes a few back-and-forth reviews on documentation). -- **Wednesday (Release):** Publish the new release on the catalog using the below release process. +- **Wednesday (Release):** Publish the new release on the catalog using the below release process. - **Thursday (Post-release Testing & Blog):** Test the post-release build briefly for any critical bugs. Confirm that new release has been pushed to the catalog. Push the blog post live. Get a known issues list together from QE and publish to the Podman Desktop Discussions, link to this from the release notes. 
- **Friday (Communicate):** Friday is statistically the best day for new announcements. Post on internal channels. Post on reddit, hackernews, twitter, etc. @@ -58,6 +58,3 @@ Pre-requisites: #### Catalog Create and submit a PR to the catalog (https://github.com/containers/podman-desktop-catalog on branch gh-pages). This is manual and will be automated in the future. - - - diff --git a/api/openapi.yaml b/api/openapi.yaml index 79c31b530..ea817a0a8 100644 --- a/api/openapi.yaml +++ b/api/openapi.yaml @@ -56,7 +56,7 @@ paths: operationId: pullModel tags: - models - description: | + description: | Download a model from the Podman AI Lab catalog. summary: | Download a model from the Podman AI Lab Catalog. @@ -139,9 +139,9 @@ components: stream: type: boolean description: | - If false the response will be returned as a single response object, + If false the response will be returned as a single response object, rather than a stream of objects - required: + required: - model ProgressResponse: diff --git a/docs/proposals/ai-studio.md b/docs/proposals/ai-studio.md index 77c66a81e..b1878d7cc 100644 --- a/docs/proposals/ai-studio.md +++ b/docs/proposals/ai-studio.md @@ -34,7 +34,7 @@ application: contextdir: model_services containerfile: base/Containerfile model-service: true - backend: + backend: - llama arch: - arm64 @@ -42,12 +42,12 @@ application: - name: chatbot-model-servicecuda contextdir: model_services containerfile: cuda/Containerfile - model-service: true - backend: + model-service: true + backend: - llama gpu-env: - cuda - arch: + arch: - amd64 ``` @@ -74,7 +74,7 @@ application: exec: # added command: # added - curl -f localhost:7860 || exit 1 # added - backend: + backend: - llama arch: - arm64 @@ -87,11 +87,11 @@ application: exec: # added command: # added - curl -f localhost:7860 || exit 1 # added - backend: + backend: - llama gpu-env: - cuda - arch: + arch: - amd64 ``` diff --git a/docs/proposals/state-management.md b/docs/proposals/state-management.md index 9ec57b45c..38de7c808 100644 --- a/docs/proposals/state-management.md +++ b/docs/proposals/state-management.md @@ -1,9 +1,9 @@ # State management -The backend manages and persists the State. The backend pushes new state to the front-end +The backend manages and persists the State. The backend pushes new state to the front-end when changes happen, and the front-end can ask for the current value of the state. -The front-end uses `readable` stores to expose the state to the different pages. The store +The front-end uses `readable` stores to expose the state to the different pages. The store listens for new states pushed by the backend (`onMessage`), and asks for the current state at initial time. @@ -14,7 +14,7 @@ The pages of the front-end subscribe to the store to get the value of the state The catalog is persisted as a file in the user's filesystem. The backend reads the file at startup, and watches the file for changes. The backend updates the state as soon as changes it detects changes. -The front-end uses a `readable` store, which waits for changes on the Catalog state +The front-end uses a `readable` store, which waits for changes on the Catalog state (using `onMessage('new-catalog-state', data)`), and asks for the current state at startup (with `postMessage('ask-catalog-state')`). @@ -23,7 +23,7 @@ of the Catalog state in a reactive manner. ## Pulled applications -The front-end initiates the pulling of an application (using `postMessage('pull-application', app-id)`). 
+The front-end initiates the pulling of an application (using `postMessage('pull-application', app-id)`). The backend manages and persists the state of the pulled applications and pushes every update on the state (progression, etc.) (using `postMessage('new-pulled-application-state, app-id, data)`). @@ -49,4 +49,3 @@ and asks for the current state at startup (using `postMessage('ask-error-state)` The interested pages of the front-end subscribe to the store to display the errors related to the page. The user can acknowledge an error (using a `postMessage('ack-error', id)`). - diff --git a/packages/backend/src/templates/python-langchain.mustache b/packages/backend/src/templates/python-langchain.mustache index 225780a3c..dea13560b 100644 --- a/packages/backend/src/templates/python-langchain.mustache +++ b/packages/backend/src/templates/python-langchain.mustache @@ -1,6 +1,6 @@ pip ======= -pip install langchain langchain-openai +pip install langchain langchain-openai AiService.py ============== @@ -10,7 +10,7 @@ from langchain_core.prompts import ChatPromptTemplate model_service = "{{{ endpoint }}}" -llm = OpenAI(base_url=model_service, +llm = OpenAI(base_url=model_service, api_key="sk-no-key-required", streaming=True) prompt = ChatPromptTemplate.from_messages([ diff --git a/packages/backend/src/templates/quarkus-langchain4j.mustache b/packages/backend/src/templates/quarkus-langchain4j.mustache index c9fe2ce03..32453f862 100644 --- a/packages/backend/src/templates/quarkus-langchain4j.mustache +++ b/packages/backend/src/templates/quarkus-langchain4j.mustache @@ -32,5 +32,3 @@ String request(String question); ====== Inject AIService into REST resource or other CDI resource and use the request method to call the LLM model. That's it - -
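For reference, the store pattern described in the `docs/proposals/state-management.md` hunks above could look roughly like the following front-end sketch. This is a minimal illustration in TypeScript using a Svelte `readable` store; only the channel names `new-catalog-state` and `ask-catalog-state` come from the proposal text, while the `onMessage`/`postMessage` helpers and the `./messaging` module are assumptions made for the example and are not part of this patch.

```typescript
// catalogStore.ts — illustrative sketch of the proposal's store pattern (not part of this patch).
// Assumes a messaging bridge exposing onMessage(channel, listener) and
// postMessage(channel, payload); the './messaging' module is hypothetical.
import { readable, type Readable } from 'svelte/store';
import { onMessage, postMessage } from './messaging';

// Minimal catalog shape for the example; the real catalog also versions models, recipes, etc.
interface Catalog {
  version: string;
  models: unknown[];
  recipes: unknown[];
}

export const catalog: Readable<Catalog> = readable<Catalog>(
  { version: '1.0', models: [], recipes: [] },
  set => {
    // Listen for new states pushed by the backend.
    onMessage('new-catalog-state', (data: Catalog) => set(data));
    // Ask for the current state at initial time.
    postMessage('ask-catalog-state');
  },
);
```

A page would then subscribe to this store (for example via `$catalog` in a Svelte component) to display the Catalog state in a reactive manner, which is the behaviour the proposal describes for all persisted states (catalog, pulled applications, errors).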