chore: fix trailing whitespace + EOL issues
Signed-off-by: Maryam Tahhan <[email protected]>
maryamtahhan authored and benoitf committed Oct 25, 2024
1 parent 0087337 commit 5ff2948
Showing 12 changed files with 30 additions and 41 deletions.
2 changes: 1 addition & 1 deletion .gitattributes
@@ -1 +1 @@
-* text=auto eol=lf
+* text=auto eol=lf
6 changes: 3 additions & 3 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -2,14 +2,14 @@

### Screenshot / video of UI

-<!-- If this PR is changing UI, please include
+<!-- If this PR is changing UI, please include
screenshots or screencasts showing the difference -->

### What issues does this PR fix or reference?

-<!-- Include any related issues from Podman Desktop
+<!-- Include any related issues from Podman Desktop
repository (or from another issue tracker). -->

### How to test this PR?

-<!-- Please explain steps to reproduce -->
+<!-- Please explain steps to reproduce -->
1 change: 0 additions & 1 deletion .github/workflows/release.yaml
@@ -161,4 +161,3 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
id: ${{ needs.tag.outputs.releaseId}}
-
1 change: 0 additions & 1 deletion .npmrc
@@ -1,2 +1 @@
node-linker=hoisted
-
10 changes: 5 additions & 5 deletions MIGRATION.md
@@ -5,7 +5,7 @@
Before **Podman AI Lab** `v1.2.0` the [user-catalog](./PACKAGING-GUIDE.md#applicationcatalog) was not versioned.
Starting from `v1.2.0`, the user-catalog requires a `version` property.

-> [!NOTE]
+> [!NOTE]
> The `user-catalog.json` file can be found in `~/.local/share/containers/podman-desktop/extensions-storage/redhat.ai-lab`.
The list of catalog versions can be found in [packages/backend/src/utils/catalogUtils.ts](https://github.com/containers/podman-desktop-extension-ai-lab/blob/main/packages/backend/src/utils/catalogUtils.ts)
@@ -14,28 +14,28 @@ The catalog has its own version number, as we may not require to update it with

## `None` to Catalog `1.0`

-`None` represents any catalog version prior to the first versioning.
+`None` represents any catalog version prior to the first versioning.

Version `1.0` of the catalog adds an important property to models, `backend`, defining the type of framework the model needs to run (e.g., LLamaCPP, WhisperCPP).

### 🛠️ How to migrate

You can delete any existing `user-catalog` by removing `~/.local/share/containers/podman-desktop/extensions-storage/redhat.ai-lab/user-catalog.json`.

-> [!WARNING]
+> [!WARNING]
> This will remove the models you have imported from the catalog. You will be able to import them again afterward.
If you want to keep the data, you can migrate it by updating certain properties within the recipes and models fields.

### Recipes

-The recipe object has a new property `backend` which defines which framework is required.
+The recipe object has a new property `backend` which defines which framework is required.
Accepted values are `llama-cpp`, `whisper-cpp`, and `none`.

Moreover, the `models` property has been changed to `recommended`.

> [!TIP]
-> Before Podman AI Lab v1.2, recipes used the `models` property to list compatible models. Now any model using the same `backend` can be used. We introduced `recommended` to highlight certain models.
+> Before Podman AI Lab v1.2, recipes used the `models` property to list compatible models. Now any model using the same `backend` can be used. We introduced `recommended` to highlight certain models.
**Example**

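To make the migration this file describes concrete, here is a minimal TypeScript sketch of the recipe change. Only `backend` (with values `llama-cpp`, `whisper-cpp`, `none`), `models`, and `recommended` come from MIGRATION.md; the surrounding shapes and the helper are illustrative assumptions.

```typescript
// Hypothetical recipe shapes -- only the property names discussed in
// MIGRATION.md are taken from the source; the rest is assumed.
interface RecipeNone {
  id: string;
  models: string[]; // pre-1.0: explicit list of compatible models
}

interface RecipeV1 {
  id: string;
  backend: 'llama-cpp' | 'whisper-cpp' | 'none'; // framework the recipe needs
  recommended: string[]; // highlighted models; any model with the same backend works
}

// Minimal migration: rename `models` to `recommended` and pick a backend.
function migrateRecipe(old: RecipeNone, backend: RecipeV1['backend']): RecipeV1 {
  return { id: old.id, backend, recommended: old.models };
}
```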
11 changes: 4 additions & 7 deletions PACKAGING-GUIDE.md
@@ -41,7 +41,7 @@ A model has the following attributes:
- ```license```: the license under which the model is available
- ```url```: the URL used to download the model
- ```memory```: the memory footprint of the model in bytes, as computed by the workflow `.github/workflows/compute-model-sizes.yaml`
-- ```sha256```: the SHA-256 checksum to be used to verify the downloaded model is identical to the original. It is optional and it must be HEX encoded
+- ```sha256```: the SHA-256 checksum to be used to verify the downloaded model is identical to the original. It is optional and it must be HEX encoded

#### Recipes

@@ -65,7 +65,7 @@ The configuration file is called ```ai-lab.yaml``` and follows the following syntax:

The root elements are called ```version``` and ```application```.

-```version``` represents the version of the specifications that ai-lab adheres to (so far, the only accepted value here is `v1.0`).
+```version``` represents the version of the specifications that ai-lab adheres to (so far, the only accepted value here is `v1.0`).

```application``` contains an attribute called ```containers``` whose syntax is an array of objects containing the following attributes:
- ```name```: the name of the container
@@ -102,15 +102,12 @@ application:
- name: chatbot-model-servicecuda
contextdir: model_services
containerfile: cuda/Containerfile
-model-service: true
+model-service: true
gpu-env:
- cuda
-arch:
+arch:
- amd64
ports:
- 8501
image: quay.io/redhat-et/model_services:latest
```
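As an aside on the `sha256` attribute documented above, here is a minimal Node.js sketch of the verification it enables; the guide only specifies a hex-encoded SHA-256 checksum, so the helper below is an illustration, not project code.

```typescript
import { createHash } from 'node:crypto';
import { createReadStream } from 'node:fs';

// Stream the downloaded model file through SHA-256 and compare the digest
// to the hex-encoded checksum from the catalog entry.
async function verifyModel(path: string, expectedHex: string): Promise<boolean> {
  const hash = createHash('sha256');
  for await (const chunk of createReadStream(path)) {
    hash.update(chunk as Buffer);
  }
  return hash.digest('hex') === expectedHex.toLowerCase();
}
```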
5 changes: 1 addition & 4 deletions RELEASE.md
@@ -14,7 +14,7 @@ Below is what a typical release week may look like:

- **Monday (Notify):** 48-hour notification. Communicate to maintainers and public channels a release will be cut on Wednesday and to merge any pending PRs. Inform QE team. Start work on blog post as it is usually the longest part of the release process.
- **Tuesday (Staging, Testing & Blog):** Stage the release (see instructions below) to create a new cut of the release to test. Test the pre-release (master branch) build briefly. Get feedback from committers (if applicable). Push the blog post for review (as it usually takes a few back-and-forth reviews on documentation).
-- **Wednesday (Release):** Publish the new release on the catalog using the below release process.
+- **Wednesday (Release):** Publish the new release on the catalog using the below release process.
- **Thursday (Post-release Testing & Blog):** Test the post-release build briefly for any critical bugs. Confirm that the new release has been pushed to the catalog. Push the blog post live. Get a known issues list together from QE and publish it to the Podman Desktop Discussions, linking to it from the release notes.
- **Friday (Communicate):** Friday is statistically the best day for new announcements. Post on internal channels. Post on reddit, hackernews, twitter, etc.

@@ -58,6 +58,3 @@ Pre-requisites:
#### Catalog

Create and submit a PR to the catalog (https://github.com/containers/podman-desktop-catalog on branch gh-pages). This is manual and will be automated in the future.
-
-
-
6 changes: 3 additions & 3 deletions api/openapi.yaml
@@ -56,7 +56,7 @@ paths:
operationId: pullModel
tags:
- models
-description: |
+description: |
Download a model from the Podman AI Lab catalog.
summary: |
Download a model from the Podman AI Lab Catalog.
@@ -139,9 +139,9 @@ components:
stream:
type: boolean
description: |
-If false the response will be returned as a single response object,
+If false the response will be returned as a single response object,
rather than a stream of objects
-required:
+required:
- model

ProgressResponse:
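A rough TypeScript sketch of a client for the `pullModel` operation shown above. The `model` and `stream` fields come from the schema in this hunk; the endpoint path and base URL are assumptions, so check the full `api/openapi.yaml` for the real values.

```typescript
// Hypothetical endpoint path -- the real one is defined in api/openapi.yaml.
async function pullModel(baseUrl: string, model: string): Promise<void> {
  const res = await fetch(`${baseUrl}/models/pull`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    // stream: false -> a single response object rather than a stream of objects
    body: JSON.stringify({ model, stream: false }),
  });
  if (!res.ok) throw new Error(`pullModel failed: ${res.status}`);
}
```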
14 changes: 7 additions & 7 deletions docs/proposals/ai-studio.md
@@ -34,20 +34,20 @@ application:
contextdir: model_services
containerfile: base/Containerfile
model-service: true
-backend:
+backend:
- llama
arch:
- arm64
- amd64
- name: chatbot-model-servicecuda
contextdir: model_services
containerfile: cuda/Containerfile
-model-service: true
-backend:
+model-service: true
+backend:
- llama
gpu-env:
- cuda
-arch:
+arch:
- amd64
```
@@ -74,7 +74,7 @@
exec: # added
command: # added
- curl -f localhost:7860 || exit 1 # added
-backend:
+backend:
- llama
arch:
- arm64
@@ -87,11 +87,11 @@ application:
exec: # added
command: # added
- curl -f localhost:7860 || exit 1 # added
-backend:
+backend:
- llama
gpu-env:
- cuda
-arch:
+arch:
- amd64
```
9 changes: 4 additions & 5 deletions docs/proposals/state-management.md
@@ -1,9 +1,9 @@
# State management

-The backend manages and persists the State. The backend pushes new state to the front-end
+The backend manages and persists the State. The backend pushes new state to the front-end
when changes happen, and the front-end can ask for the current value of the state.

-The front-end uses `readable` stores to expose the state to the different pages. The store
+The front-end uses `readable` stores to expose the state to the different pages. The store
listens for new states pushed by the backend (`onMessage`), and asks for the current state
at initial time.

@@ -14,7 +14,7 @@ The pages of the front-end subscribe to the store to get the value of the state
The catalog is persisted as a file in the user's filesystem. The backend reads the file at startup,
and watches the file for changes. The backend updates the state as soon as it detects changes.

-The front-end uses a `readable` store, which waits for changes on the Catalog state
+The front-end uses a `readable` store, which waits for changes on the Catalog state
(using `onMessage('new-catalog-state', data)`),
and asks for the current state at startup (with `postMessage('ask-catalog-state')`).

@@ -23,7 +23,7 @@ of the Catalog state in a reactive manner.

## Pulled applications

-The front-end initiates the pulling of an application (using `postMessage('pull-application', app-id)`).
+The front-end initiates the pulling of an application (using `postMessage('pull-application', app-id)`).

The backend manages and persists the state of the pulled applications and pushes every update
on the state (progression, etc.) (using `postMessage('new-pulled-application-state', app-id, data)`).
@@ -49,4 +49,3 @@ and asks for the current state at startup (using `postMessage('ask-error-state)`
The interested pages of the front-end subscribe to the store to display the errors related to the page.

The user can acknowledge an error (using a `postMessage('ack-error', id)`).
-
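A minimal sketch of the store pattern this proposal describes, using a Svelte `readable` store; the `onMessage`/`postMessage` helpers are the ones named in the prose, but their exact signatures here are assumptions.

```typescript
import { readable } from 'svelte/store';

// Assumed messaging helpers matching the prose above.
declare function onMessage(channel: string, cb: (data: unknown) => void): void;
declare function postMessage(channel: string): void;

// Store tracking the Catalog state pushed by the backend.
export const catalogState = readable<unknown>(undefined, (set) => {
  // Listen for new states pushed by the backend...
  onMessage('new-catalog-state', (data) => set(data));
  // ...and ask for the current state at startup.
  postMessage('ask-catalog-state');
});
```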
4 changes: 2 additions & 2 deletions packages/backend/src/templates/python-langchain.mustache
@@ -1,6 +1,6 @@
pip
=======
-pip install langchain langchain-openai
+pip install langchain langchain-openai

AiService.py
==============
@@ -10,7 +10,7 @@ from langchain_core.prompts import ChatPromptTemplate

model_service = "{{{ endpoint }}}"

-llm = OpenAI(base_url=model_service,
+llm = OpenAI(base_url=model_service,
api_key="sk-no-key-required",
streaming=True)
prompt = ChatPromptTemplate.from_messages([
2 changes: 0 additions & 2 deletions packages/backend/src/templates/quarkus-langchain4j.mustache
@@ -32,5 +32,3 @@ String request(String question);

======
Inject AIService into a REST resource or other CDI resource and use the request method to call the LLM model. That's it.
-
-