From 6a3ca94b6616d41cf33f1a7845de7ec0db3905d8 Mon Sep 17 00:00:00 2001 From: axel7083 <42176370+axel7083@users.noreply.github.com> Date: Wed, 12 Jun 2024 17:00:56 +0200 Subject: [PATCH 1/8] feat: make recipes uses models based on backend property Signed-off-by: axel7083 <42176370+axel7083@users.noreply.github.com> --- packages/backend/src/assets/ai.json | 43 ++++------------- .../model/ModelColumnRecipeRecommended.svelte | 5 +- .../frontend/src/models/RecipeModelInfo.ts | 1 + packages/frontend/src/pages/Recipe.svelte | 15 ++++-- .../frontend/src/pages/RecipeModels.svelte | 47 +++++++++---------- packages/shared/src/models/IRecipe.ts | 2 +- 6 files changed, 47 insertions(+), 66 deletions(-) diff --git a/packages/backend/src/assets/ai.json b/packages/backend/src/assets/ai.json index d0de1db41..74c4090ca 100644 --- a/packages/backend/src/assets/ai.json +++ b/packages/backend/src/assets/ai.json @@ -12,18 +12,9 @@ ], "basedir": "recipes/natural_language_processing/chatbot", "readme": "# Chat Application\n\n This recipe helps developers start building their own custom LLM enabled chat applications. It consists of two main components: the Model Service and the AI Application.\n\n There are a few options today for local Model Serving, but this recipe will use [`llama-cpp-python`](https://github.com/abetlen/llama-cpp-python) and their OpenAI compatible Model Service. There is a Containerfile provided that can be used to build this Model Service within the repo, [`model_servers/llamacpp_python/base/Containerfile`](/model_servers/llamacpp_python/base/Containerfile).\n\n The AI Application will connect to the Model Service via its OpenAI compatible API. The recipe relies on [Langchain's](https://python.langchain.com/docs/get_started/introduction) python package to simplify communication with the Model Service and uses [Streamlit](https://streamlit.io/) for the UI layer. You can find an example of the chat application below.\n\n![](/assets/chatbot_ui.png) \n\n\n## Try the Chat Application\n\nThe [Podman Desktop](https://podman-desktop.io) [AI Lab Extension](https://github.com/containers/podman-desktop-extension-ai-lab) includes this recipe among others. To try it out, open `Recipes Catalog` -> `Chatbot` and follow the instructions to start the application.\n\n# Build the Application\n\nThe rest of this document will explain how to build and run the application from the terminal, and will\ngo into greater detail on how each container in the Pod above is built, run, and \nwhat purpose it serves in the overall application. All the recipes use a central [Makefile](../../common/Makefile.common) that includes variables populated with default values to simplify getting started. Please review the [Makefile docs](../../common/README.md), to learn about further customizing your application.\n\n\nThis application requires a model, a model service and an AI inferencing application.\n\n* [Quickstart](#quickstart)\n* [Download a model](#download-a-model)\n* [Build the Model Service](#build-the-model-service)\n* [Deploy the Model Service](#deploy-the-model-service)\n* [Build the AI Application](#build-the-ai-application)\n* [Deploy the AI Application](#deploy-the-ai-application)\n* [Interact with the AI Application](#interact-with-the-ai-application)\n* [Embed the AI Application in a Bootable Container Image](#embed-the-ai-application-in-a-bootable-container-image)\n\n\n## Quickstart\nTo run the application with pre-built images from `quay.io/ai-lab`, use `make quadlet`. 
This command\nbuilds the application's metadata and generates Kubernetes YAML at `./build/chatbot.yaml` to spin up a Pod that can then be launched locally.\nTry it with:\n\n```\nmake quadlet\npodman kube play build/chatbot.yaml\n```\n\nThis will take a few minutes if the model and model-server container images need to be downloaded. \nThe Pod is named `chatbot`, so you may use [Podman](https://podman.io) to manage the Pod and its containers:\n\n```\npodman pod list\npodman ps\n```\n\nOnce the Pod and its containers are running, the application can be accessed at `http://localhost:8501`. \nPlease refer to the section below for more details about [interacting with the chatbot application](#interact-with-the-ai-application).\n\nTo stop and remove the Pod, run:\n\n```\npodman pod stop chatbot\npodman pod rm chatbot\n```\n\n## Download a model\n\nIf you are just getting started, we recommend using [granite-7b-lab](https://huggingface.co/instructlab/granite-7b-lab). This is a well\nperformant mid-sized model with an apache-2.0 license. In order to use it with our Model Service we need it converted\nand quantized into the [GGUF format](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md). There are a number of\nways to get a GGUF version of granite-7b-lab, but the simplest is to download a pre-converted one from\n[huggingface.co](https://huggingface.co) here: https://huggingface.co/instructlab/granite-7b-lab-GGUF.\n\nThe recommended model can be downloaded using the code snippet below:\n\n```bash\ncd ../../../models\ncurl -sLO https://huggingface.co/instructlab/granite-7b-lab-GGUF/resolve/main/granite-7b-lab-Q4_K_M.gguf\ncd ../recipes/natural_language_processing/chatbot\n```\n\n_A full list of supported open models is forthcoming._ \n\n\n## Build the Model Service\n\nThe complete instructions for building and deploying the Model Service can be found in the\n[llamacpp_python model-service document](../../../model_servers/llamacpp_python/README.md).\n\nThe Model Service can be built from make commands from the [llamacpp_python directory](../../../model_servers/llamacpp_python/).\n\n```bash\n# from path model_servers/llamacpp_python from repo containers/ai-lab-recipes\nmake build\n```\nCheckout the [Makefile](../../../model_servers/llamacpp_python/Makefile) to get more details on different options for how to build.\n\n## Deploy the Model Service\n\nThe local Model Service relies on a volume mount to the localhost to access the model files. It also employs environment variables to dictate the model used and where its served. You can start your local Model Service using the following `make` command from `model_servers/llamacpp_python` set with reasonable defaults:\n\n```bash\n# from path model_servers/llamacpp_python from repo containers/ai-lab-recipes\nmake run\n```\n\n## Build the AI Application\n\nThe AI Application can be built from the make command:\n\n```bash\n# Run this from the current directory (path recipes/natural_language_processing/chatbot from repo containers/ai-lab-recipes)\nmake build\n```\n\n## Deploy the AI Application\n\nMake sure the Model Service is up and running before starting this container image. When starting the AI Application container image we need to direct it to the correct `MODEL_ENDPOINT`. This could be any appropriately hosted Model Service (running locally or in the cloud) using an OpenAI compatible API. In our case the Model Service is running inside the Podman machine so we need to provide it with the appropriate address `10.88.0.1`. 
To deploy the AI application use the following:\n\n```bash\n# Run this from the current directory (path recipes/natural_language_processing/chatbot from repo containers/ai-lab-recipes)\nmake run \n```\n\n## Interact with the AI Application\n\nEverything should now be up an running with the chat application available at [`http://localhost:8501`](http://localhost:8501). By using this recipe and getting this starting point established, users should now have an easier time customizing and building their own LLM enabled chatbot applications. \n\n## Embed the AI Application in a Bootable Container Image\n\nTo build a bootable container image that includes this sample chatbot workload as a service that starts when a system is booted, run: `make -f Makefile bootc`. You can optionally override the default image / tag you want to give the make command by specifying it as follows: `make -f Makefile BOOTC_IMAGE= bootc`.\n\nSubstituting the bootc/Containerfile FROM command is simple using the Makefile FROM option.\n\n```bash\nmake FROM=registry.redhat.io/rhel9/rhel-bootc:9.4 bootc\n```\n\nSelecting the ARCH for the bootc/Containerfile is simple using the Makefile ARCH= variable.\n\n```\nmake ARCH=x86_64 bootc\n```\n\nThe magic happens when you have a bootc enabled system running. If you do, and you'd like to update the operating system to the OS you just built\nwith the chatbot application, it's as simple as ssh-ing into the bootc system and running:\n\n```bash\nbootc switch quay.io/ai-lab/chatbot-bootc:latest\n```\n\nUpon a reboot, you'll see that the chatbot service is running on the system. Check on the service with:\n\n```bash\nssh user@bootc-system-ip\nsudo systemctl status chatbot\n```\n\n### What are bootable containers?\n\nWhat's a [bootable OCI container](https://containers.github.io/bootc/) and what's it got to do with AI?\n\nThat's a good question! We think it's a good idea to embed AI workloads (or any workload!) into bootable images at _build time_ rather than\nat _runtime_. This extends the benefits, such as portability and predictability, that containerizing applications provides to the operating system.\nBootable OCI images bake exactly what you need to run your workloads into the operating system at build time by using your favorite containerization\ntools. Might I suggest [podman](https://podman.io/)?\n\nOnce installed, a bootc enabled system can be updated by providing an updated bootable OCI image from any OCI\nimage registry with a single `bootc` command. This works especially well for fleets of devices that have fixed workloads - think\nfactories or appliances. 
Who doesn't want to add a little AI to their appliance, am I right?\n\nBootable images lend toward immutable operating systems, and the more immutable an operating system is, the less that can go wrong at runtime!\n\n#### Creating bootable disk images\n\nYou can convert a bootc image to a bootable disk image using the\n[quay.io/centos-bootc/bootc-image-builder](https://github.com/osbuild/bootc-image-builder) container image.\n\nThis container image allows you to build and deploy [multiple disk image types](../../common/README_bootc_image_builder.md) from bootc container images.\n\nDefault image types can be set via the DISK_TYPE Makefile variable.\n\n`make bootc-image-builder DISK_TYPE=ami`\n", - "models": [ + "recommended": [ "hf.instructlab.granite-7b-lab-GGUF", - "hf.instructlab.merlinite-7b-lab-GGUF", - "hf.TheBloke.mistral-7b-instruct-v0.2.Q4_K_M", - "hf.NousResearch.Hermes-2-Pro-Mistral-7B.Q4_K_M", - "hf.ibm.merlinite-7b-Q4_K_M", - "hf.froggeric.Cerebrum-1.0-7b-Q4_KS", - "hf.TheBloke.openchat-3.5-0106.Q4_K_M", - "hf.TheBloke.mistral-7b-openorca.Q4_K_M", - "hf.MaziyarPanahi.phi-2.Q4_K_M", - "hf.llmware.dragon-mistral-7b-q4_k_m", - "hf.MaziyarPanahi.MixTAO-7Bx2-MoE-Instruct-v7.0.Q4_K_M" + "hf.instructlab.merlinite-7b-lab-GGUF" ], "backend": "llama-cpp" }, @@ -39,18 +30,9 @@ ], "basedir": "recipes/natural_language_processing/summarizer", "readme": "# Text Summarizer Application\n\n This recipe helps developers start building their own custom LLM enabled summarizer applications. It consists of two main components: the Model Service and the AI Application.\n\n There are a few options today for local Model Serving, but this recipe will use [`llama-cpp-python`](https://github.com/abetlen/llama-cpp-python) and their OpenAI compatible Model Service. There is a Containerfile provided that can be used to build this Model Service within the repo, [`model_servers/llamacpp_python/base/Containerfile`](/model_servers/llamacpp_python/base/Containerfile).\n\n The AI Application will connect to the Model Service via its OpenAI compatible API. The recipe relies on [Langchain's](https://python.langchain.com/docs/get_started/introduction) python package to simplify communication with the Model Service and uses [Streamlit](https://streamlit.io/) for the UI layer. You can find an example of the summarizer application below.\n\n![](/assets/summarizer_ui.png) \n\n\n## Try the Summarizer Application\n\nThe [Podman Desktop](https://podman-desktop.io) [AI Lab Extension](https://github.com/containers/podman-desktop-extension-ai-lab) includes this recipe among others. To try it out, open `Recipes Catalog` -> `Summarizer` and follow the instructions to start the application.\n\n# Build the Application\n\nThe rest of this document will explain how to build and run the application from the terminal, and will\ngo into greater detail on how each container in the Pod above is built, run, and \nwhat purpose it serves in the overall application. All the recipes use a central [Makefile](../../common/Makefile.common) that includes variables populated with default values to simplify getting started. 
Please review the [Makefile docs](../../common/README.md), to learn about further customizing your application.\n\n\nThis application requires a model, a model service and an AI inferencing application.\n\n* [Quickstart](#quickstart)\n* [Download a model](#download-a-model)\n* [Build the Model Service](#build-the-model-service)\n* [Deploy the Model Service](#deploy-the-model-service)\n* [Build the AI Application](#build-the-ai-application)\n* [Deploy the AI Application](#deploy-the-ai-application)\n* [Interact with the AI Application](#interact-with-the-ai-application)\n* [Embed the AI Application in a Bootable Container Image](#embed-the-ai-application-in-a-bootable-container-image)\n\n\n## Quickstart\nTo run the application with pre-built images from `quay.io/ai-lab`, use `make quadlet`. This command\nbuilds the application's metadata and generates Kubernetes YAML at `./build/summarizer.yaml` to spin up a Pod that can then be launched locally.\nTry it with:\n\n```\nmake quadlet\npodman kube play build/summarizer.yaml\n```\n\nThis will take a few minutes if the model and model-server container images need to be downloaded. \nThe Pod is named `summarizer`, so you may use [Podman](https://podman.io) to manage the Pod and its containers:\n\n```\npodman pod list\npodman ps\n```\n\nOnce the Pod and its containers are running, the application can be accessed at `http://localhost:8501`. \nPlease refer to the section below for more details about [interacting with the summarizer application](#interact-with-the-ai-application).\n\nTo stop and remove the Pod, run:\n\n```\npodman pod stop summarizer\npodman pod rm summarizer\n```\n\n## Download a model\n\nIf you are just getting started, we recommend using [granite-7b-lab](https://huggingface.co/instructlab/granite-7b-lab). This is a well\nperformant mid-sized model with an apache-2.0 license. In order to use it with our Model Service we need it converted\nand quantized into the [GGUF format](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md). There are a number of\nways to get a GGUF version of granite-7b-lab, but the simplest is to download a pre-converted one from\n[huggingface.co](https://huggingface.co) here: https://huggingface.co/instructlab/granite-7b-lab-GGUF/blob/main/granite-7b-lab-Q4_K_M.gguf.\n\nThe recommended model can be downloaded using the code snippet below:\n\n```bash\ncd ../../../models\ncurl -sLO https://huggingface.co/instructlab/granite-7b-lab-GGUF/resolve/main/granite-7b-lab-Q4_K_M.gguf\ncd ../recipes/natural_language_processing/summarizer\n```\n\n_A full list of supported open models is forthcoming._ \n\n\n## Build the Model Service\n\nThe complete instructions for building and deploying the Model Service can be found in the\n[llamacpp_python model-service document](../../../model_servers/llamacpp_python/README.md).\n\nThe Model Service can be built from make commands from the [llamacpp_python directory](../../../model_servers/llamacpp_python/).\n\n```bash\n# from path model_servers/llamacpp_python from repo containers/ai-lab-recipes\nmake build\n```\nCheckout the [Makefile](../../../model_servers/llamacpp_python/Makefile) to get more details on different options for how to build.\n\n## Deploy the Model Service\n\nThe local Model Service relies on a volume mount to the localhost to access the model files. It also employs environment variables to dictate the model used and where its served. 
You can start your local Model Service using the following `make` command from `model_servers/llamacpp_python` set with reasonable defaults:\n\n```bash\n# from path model_servers/llamacpp_python from repo containers/ai-lab-recipes\nmake run\n```\n\n## Build the AI Application\n\nThe AI Application can be built from the make command:\n\n```bash\n# Run this from the current directory (path recipes/natural_language_processing/summarizer from repo containers/ai-lab-recipes)\nmake build\n```\n\n## Deploy the AI Application\n\nMake sure the Model Service is up and running before starting this container image. When starting the AI Application container image we need to direct it to the correct `MODEL_ENDPOINT`. This could be any appropriately hosted Model Service (running locally or in the cloud) using an OpenAI compatible API. In our case the Model Service is running inside the Podman machine so we need to provide it with the appropriate address `10.88.0.1`. To deploy the AI application use the following:\n\n```bash\n# Run this from the current directory (path recipes/natural_language_processing/summarizer from repo containers/ai-lab-recipes)\nmake run \n```\n\n## Interact with the AI Application\n\nEverything should now be up an running with the summarizer application available at [`http://localhost:8501`](http://localhost:8501). By using this recipe and getting this starting point established, users should now have an easier time customizing and building their own LLM enabled summarizer applications. \n\n## Embed the AI Application in a Bootable Container Image\n\nTo build a bootable container image that includes this sample summarizer workload as a service that starts when a system is booted, run: `make -f Makefile bootc`. You can optionally override the default image / tag you want to give the make command by specifying it as follows: `make -f Makefile BOOTC_IMAGE= bootc`.\n\nSubstituting the bootc/Containerfile FROM command is simple using the Makefile FROM option.\n\n```bash\nmake FROM=registry.redhat.io/rhel9/rhel-bootc:9.4 bootc\n```\n\nSelecting the ARCH for the bootc/Containerfile is simple using the Makefile ARCH= variable.\n\n```\nmake ARCH=x86_64 bootc\n```\n\nThe magic happens when you have a bootc enabled system running. If you do, and you'd like to update the operating system to the OS you just built\nwith the summarizer application, it's as simple as ssh-ing into the bootc system and running:\n\n```bash\nbootc switch quay.io/ai-lab/summarizer-bootc:latest\n```\n\nUpon a reboot, you'll see that the summarizer service is running on the system. Check on the service with:\n\n```bash\nssh user@bootc-system-ip\nsudo systemctl status summarizer\n```\n\n### What are bootable containers?\n\nWhat's a [bootable OCI container](https://containers.github.io/bootc/) and what's it got to do with AI?\n\nThat's a good question! We think it's a good idea to embed AI workloads (or any workload!) into bootable images at _build time_ rather than\nat _runtime_. This extends the benefits, such as portability and predictability, that containerizing applications provides to the operating system.\nBootable OCI images bake exactly what you need to run your workloads into the operating system at build time by using your favorite containerization\ntools. Might I suggest [podman](https://podman.io/)?\n\nOnce installed, a bootc enabled system can be updated by providing an updated bootable OCI image from any OCI\nimage registry with a single `bootc` command. 
This works especially well for fleets of devices that have fixed workloads - think\nfactories or appliances. Who doesn't want to add a little AI to their appliance, am I right?\n\nBootable images lend toward immutable operating systems, and the more immutable an operating system is, the less that can go wrong at runtime!\n\n#### Creating bootable disk images\n\nYou can convert a bootc image to a bootable disk image using the\n[quay.io/centos-bootc/bootc-image-builder](https://github.com/osbuild/bootc-image-builder) container image.\n\nThis container image allows you to build and deploy [multiple disk image types](../../common/README_bootc_image_builder.md) from bootc container images.\n\nDefault image types can be set via the DISK_TYPE Makefile variable.\n\n`make bootc-image-builder DISK_TYPE=ami`\n", - "models": [ + "recommended": [ "hf.instructlab.granite-7b-lab-GGUF", - "hf.instructlab.merlinite-7b-lab-GGUF", - "hf.TheBloke.mistral-7b-instruct-v0.2.Q4_K_M", - "hf.NousResearch.Hermes-2-Pro-Mistral-7B.Q4_K_M", - "hf.ibm.merlinite-7b-Q4_K_M", - "hf.froggeric.Cerebrum-1.0-7b-Q4_KS", - "hf.TheBloke.openchat-3.5-0106.Q4_K_M", - "hf.TheBloke.mistral-7b-openorca.Q4_K_M", - "hf.MaziyarPanahi.phi-2.Q4_K_M", - "hf.llmware.dragon-mistral-7b-q4_k_m", - "hf.MaziyarPanahi.MixTAO-7Bx2-MoE-Instruct-v7.0.Q4_K_M" + "hf.instructlab.merlinite-7b-lab-GGUF" ], "backend": "llama-cpp" }, @@ -66,18 +48,9 @@ ], "basedir": "recipes/natural_language_processing/codegen", "readme": "# Code Generation Application\n\n This recipe helps developers start building their own custom LLM enabled code generation applications. It consists of two main components: the Model Service and the AI Application.\n\n There are a few options today for local Model Serving, but this recipe will use [`llama-cpp-python`](https://github.com/abetlen/llama-cpp-python) and their OpenAI compatible Model Service. There is a Containerfile provided that can be used to build this Model Service within the repo, [`model_servers/llamacpp_python/base/Containerfile`](/model_servers/llamacpp_python/base/Containerfile).\n\n The AI Application will connect to the Model Service via its OpenAI compatible API. The recipe relies on [Langchain's](https://python.langchain.com/docs/get_started/introduction) python package to simplify communication with the Model Service and uses [Streamlit](https://streamlit.io/) for the UI layer. You can find an example of the code generation application below.\n\n![](/assets/codegen_ui.png) \n\n\n## Try the Code Generation Application\n\nThe [Podman Desktop](https://podman-desktop.io) [AI Lab Extension](https://github.com/containers/podman-desktop-extension-ai-lab) includes this recipe among others. To try it out, open `Recipes Catalog` -> `Code Generation` and follow the instructions to start the application.\n\n# Build the Application\n\nThe rest of this document will explain how to build and run the application from the terminal, and will\ngo into greater detail on how each container in the Pod above is built, run, and \nwhat purpose it serves in the overall application. All the recipes use a central [Makefile](../../common/Makefile.common) that includes variables populated with default values to simplify getting started. 
Please review the [Makefile docs](../../common/README.md), to learn about further customizing your application.\n\n\nThis application requires a model, a model service and an AI inferencing application.\n\n* [Quickstart](#quickstart)\n* [Download a model](#download-a-model)\n* [Build the Model Service](#build-the-model-service)\n* [Deploy the Model Service](#deploy-the-model-service)\n* [Build the AI Application](#build-the-ai-application)\n* [Deploy the AI Application](#deploy-the-ai-application)\n* [Interact with the AI Application](#interact-with-the-ai-application)\n* [Embed the AI Application in a Bootable Container Image](#embed-the-ai-application-in-a-bootable-container-image)\n\n\n## Quickstart\nTo run the application with pre-built images from `quay.io/ai-lab`, use `make quadlet`. This command\nbuilds the application's metadata and generates Kubernetes YAML at `./build/codegen.yaml` to spin up a Pod that can then be launched locally.\nTry it with:\n\n```\nmake quadlet\npodman kube play build/codegen.yaml\n```\n\nThis will take a few minutes if the model and model-server container images need to be downloaded. \nThe Pod is named `codegen`, so you may use [Podman](https://podman.io) to manage the Pod and its containers:\n\n```\npodman pod list\npodman ps\n```\n\nOnce the Pod and its containers are running, the application can be accessed at `http://localhost:8501`. \nPlease refer to the section below for more details about [interacting with the codegen application](#interact-with-the-ai-application).\n\nTo stop and remove the Pod, run:\n\n```\npodman pod stop codegen\npodman pod rm codgen\n```\n\n## Download a model\n\nIf you are just getting started, we recommend using [Mistral-7B-code-16k-qlora](https://huggingface.co/Nondzu/Mistral-7B-code-16k-qlora). This is a well\nperformant mid-sized model with an apache-2.0 license fine tuned for code generation. In order to use it with our Model Service we need it converted\nand quantized into the [GGUF format](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md). There are a number of\nways to get a GGUF version of Mistral-7B-code-16k-qlora, but the simplest is to download a pre-converted one from\n[huggingface.co](https://huggingface.co) here: https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GGUF.\n\nThere are a number of options for quantization level, but we recommend `Q4_K_M`. \n\nThe recommended model can be downloaded using the code snippet below:\n\n```bash\ncd ../../../models\ncurl -sLO https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GGUF/resolve/main/mistral-7b-code-16k-qlora.Q4_K_M.gguf\ncd ../recipes/natural_language_processing/codgen\n```\n\n_A full list of supported open models is forthcoming._ \n\n\n## Build the Model Service\n\nThe complete instructions for building and deploying the Model Service can be found in the\n[llamacpp_python model-service document](../../../model_servers/llamacpp_python/README.md).\n\nThe Model Service can be built from make commands from the [llamacpp_python directory](../../../model_servers/llamacpp_python/).\n\n```bash\n# from path model_servers/llamacpp_python from repo containers/ai-lab-recipes\nmake build\n```\nCheckout the [Makefile](../../../model_servers/llamacpp_python/Makefile) to get more details on different options for how to build.\n\n## Deploy the Model Service\n\nThe local Model Service relies on a volume mount to the localhost to access the model files. It also employs environment variables to dictate the model used and where its served. 
You can start your local Model Service using the following `make` command from `model_servers/llamacpp_python` set with reasonable defaults:\n\n```bash\n# from path model_servers/llamacpp_python from repo containers/ai-lab-recipes\nmake run\n```\n\n## Build the AI Application\n\nThe AI Application can be built from the make command:\n\n```bash\n# Run this from the current directory (path recipes/natural_language_processing/codegen from repo containers/ai-lab-recipes)\nmake build\n```\n\n## Deploy the AI Application\n\nMake sure the Model Service is up and running before starting this container image. When starting the AI Application container image we need to direct it to the correct `MODEL_ENDPOINT`. This could be any appropriately hosted Model Service (running locally or in the cloud) using an OpenAI compatible API. In our case the Model Service is running inside the Podman machine so we need to provide it with the appropriate address `10.88.0.1`. To deploy the AI application use the following:\n\n```bash\n# Run this from the current directory (path recipes/natural_language_processing/codegen from repo containers/ai-lab-recipes)\nmake run \n```\n\n## Interact with the AI Application\n\nEverything should now be up an running with the code generation application available at [`http://localhost:8501`](http://localhost:8501). By using this recipe and getting this starting point established, users should now have an easier time customizing and building their own LLM enabled code generation applications. \n\n## Embed the AI Application in a Bootable Container Image\n\nTo build a bootable container image that includes this sample code generation workload as a service that starts when a system is booted, run: `make -f Makefile bootc`. You can optionally override the default image / tag you want to give the make command by specifying it as follows: `make -f Makefile BOOTC_IMAGE= bootc`.\n\nSubstituting the bootc/Containerfile FROM command is simple using the Makefile FROM option.\n\n```bash\nmake FROM=registry.redhat.io/rhel9/rhel-bootc:9.4 bootc\n```\n\nSelecting the ARCH for the bootc/Containerfile is simple using the Makefile ARCH= variable.\n\n```\nmake ARCH=x86_64 bootc\n```\n\nThe magic happens when you have a bootc enabled system running. If you do, and you'd like to update the operating system to the OS you just built\nwith the code generation application, it's as simple as ssh-ing into the bootc system and running:\n\n```bash\nbootc switch quay.io/ai-lab/codegen-bootc:latest\n```\n\nUpon a reboot, you'll see that the codegen service is running on the system. Check on the service with:\n\n```bash\nssh user@bootc-system-ip\nsudo systemctl status codegen\n```\n\n### What are bootable containers?\n\nWhat's a [bootable OCI container](https://containers.github.io/bootc/) and what's it got to do with AI?\n\nThat's a good question! We think it's a good idea to embed AI workloads (or any workload!) into bootable images at _build time_ rather than\nat _runtime_. This extends the benefits, such as portability and predictability, that containerizing applications provides to the operating system.\nBootable OCI images bake exactly what you need to run your workloads into the operating system at build time by using your favorite containerization\ntools. Might I suggest [podman](https://podman.io/)?\n\nOnce installed, a bootc enabled system can be updated by providing an updated bootable OCI image from any OCI\nimage registry with a single `bootc` command. 
This works especially well for fleets of devices that have fixed workloads - think\nfactories or appliances. Who doesn't want to add a little AI to their appliance, am I right?\n\nBootable images lend toward immutable operating systems, and the more immutable an operating system is, the less that can go wrong at runtime!\n\n#### Creating bootable disk images\n\nYou can convert a bootc image to a bootable disk image using the\n[quay.io/centos-bootc/bootc-image-builder](https://github.com/osbuild/bootc-image-builder) container image.\n\nThis container image allows you to build and deploy [multiple disk image types](../../common/README_bootc_image_builder.md) from bootc container images.\n\nDefault image types can be set via the DISK_TYPE Makefile variable.\n\n`make bootc-image-builder DISK_TYPE=ami`\n", - "models": [ + "recommended": [ "hf.TheBloke.mistral-7b-code-16k-qlora.Q4_K_M", - "hf.TheBloke.mistral-7b-codealpaca-lora.Q4_K_M", - "hf.TheBloke.mistral-7b-instruct-v0.2.Q4_K_M", - "hf.NousResearch.Hermes-2-Pro-Mistral-7B.Q4_K_M", - "hf.ibm.merlinite-7b-Q4_K_M", - "hf.froggeric.Cerebrum-1.0-7b-Q4_KS", - "hf.TheBloke.openchat-3.5-0106.Q4_K_M", - "hf.TheBloke.mistral-7b-openorca.Q4_K_M", - "hf.MaziyarPanahi.phi-2.Q4_K_M", - "hf.llmware.dragon-mistral-7b-q4_k_m", - "hf.MaziyarPanahi.MixTAO-7Bx2-MoE-Instruct-v7.0.Q4_K_M" + "hf.TheBloke.mistral-7b-codealpaca-lora.Q4_K_M" ], "backend": "llama-cpp" }, @@ -93,7 +66,7 @@ ], "basedir": "recipes/audio/audio_to_text", "readme": "# Audio to Text Application\n\nThis recipe helps developers start building their own custom AI enabled audio transcription applications. It consists of two main components: the Model Service and the AI Application.\n\nThere are a few options today for local Model Serving, but this recipe will use [`whisper-cpp`](https://github.com/ggerganov/whisper.cpp.git) and its included Model Service. There is a Containerfile provided that can be used to build this Model Service within the repo, [`model_servers/whispercpp/base/Containerfile`](/model_servers/whispercpp/base/Containerfile).\n\nThe AI Application will connect to the Model Service via an API. The recipe relies on [Langchain's](https://python.langchain.com/docs/get_started/introduction) python package to simplify communication with the Model Service and uses [Streamlit](https://streamlit.io/) for the UI layer. You can find an example of the audio to text application below.\n\n\n![](/assets/whisper.png) \n\n## Try the Audio to Text Application:\n\nThe [Podman Desktop](https://podman-desktop.io) [AI Lab Extension](https://github.com/containers/podman-desktop-extension-ai-lab) includes this recipe among others. To try it out, open `Recipes Catalog` -> `Audio to Text` and follow the instructions to start the application.\n\n# Build the Application\n\nThe rest of this document will explain how to build and run the application from the terminal, and will go into greater detail on how each container in the application above is built, run, and what purpose it serves in the overall application. All the recipes use a central [Makefile](../../common/Makefile.common) that includes variables populated with default values to simplify getting started. 
Please review the [Makefile docs](../../common/README.md), to learn about further customizing your application.\n\n* [Download a model](#download-a-model)\n* [Build the Model Service](#build-the-model-service)\n* [Deploy the Model Service](#deploy-the-model-service)\n* [Build the AI Application](#build-the-ai-application)\n* [Deploy the AI Application](#deploy-the-ai-application)\n* [Interact with the AI Application](#interact-with-the-ai-application)\n * [Input audio files](#input-audio-files)\n\n## Download a model\n\nIf you are just getting started, we recommend using [ggerganov/whisper.cpp](https://huggingface.co/ggerganov/whisper.cpp).\nThis is a well performant model with an MIT license.\nIt's simple to download a pre-converted whisper model from [huggingface.co](https://huggingface.co)\nhere: https://huggingface.co/ggerganov/whisper.cpp. There are a number of options, but we recommend to start with `ggml-small.bin`.\n\nThe recommended model can be downloaded using the code snippet below:\n\n```bash\ncd ../../../models\ncurl -sLO https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.bin\ncd ../recipes/audio/audio_to_text\n```\n\n_A full list of supported open models is forthcoming._\n\n\n## Build the Model Service\n\nThe complete instructions for building and deploying the Model Service can be found in the [whispercpp model-service document](../../../model_servers/whispercpp/README.md).\n\n```bash\n# from path model_servers/whispercpp from repo containers/ai-lab-recipes\nmake build\n```\nCheckout the [Makefile](../../../model_servers/whispercpp/Makefile) to get more details on different options for how to build.\n\n## Deploy the Model Service\n\nThe local Model Service relies on a volume mount to the localhost to access the model files. It also employs environment variables to dictate the model used and where its served. You can start your local Model Service using the following `make` command from `model_servers/whispercpp` set with reasonable defaults:\n\n```bash\n# from path model_servers/whispercpp from repo containers/ai-lab-recipes\nmake run\n```\n\n## Build the AI Application\n\nNow that the Model Service is running we want to build and deploy our AI Application. 
Use the provided Containerfile to build the AI Application\nimage from the [`audio-to-text/`](./) directory.\n\n```bash\n# from path recipes/audio/audio_to_text from repo containers/ai-lab-recipes\npodman build -t audio-to-text app\n```\n### Deploy the AI Application\n\nMake sure the Model Service is up and running before starting this container image.\nWhen starting the AI Application container image we need to direct it to the correct `MODEL_ENDPOINT`.\nThis could be any appropriately hosted Model Service (running locally or in the cloud) using a compatible API.\nThe following Podman command can be used to run your AI Application:\n\n```bash\npodman run --rm -it -p 8501:8501 -e MODEL_ENDPOINT=http://10.88.0.1:8001/inference audio-to-text \n```\n\n### Interact with the AI Application\n\nOnce the streamlit application is up and running, you should be able to access it at `http://localhost:8501`.\nFrom here, you can upload audio files from your local machine and translate the audio files as shown below.\n\nBy using this recipe and getting this starting point established,\nusers should now have an easier time customizing and building their own AI enabled applications.\n\n#### Input audio files\n\nWhisper.cpp requires as an input 16-bit WAV audio files.\nTo convert your input audio files to 16-bit WAV format you can use `ffmpeg` like this:\n\n```bash\nffmpeg -i -ar 16000 -ac 1 -c:a pcm_s16le \n```\n", - "models": [ + "recommended": [ "hf.ggerganov.whisper.cpp" ], "backend": "whisper-cpp" @@ -110,7 +83,7 @@ ], "basedir": "recipes/computer_vision/object_detection", "readme": "# Object Detection\n\nThis recipe helps developers start building their own custom AI enabled object detection applications. It consists of two main components: the Model Service and the AI Application.\n\nThere are a few options today for local Model Serving, but this recipe will use our FastAPI [`object_detection_python`](../../../model_servers/object_detection_python/src/object_detection_server.py) model server. There is a Containerfile provided that can be used to build this Model Service within the repo, [`model_servers/object_detection_python/base/Containerfile`](/model_servers/object_detection_python/base/Containerfile).\n\nThe AI Application will connect to the Model Service via an API. The recipe relies on [Streamlit](https://streamlit.io/) for the UI layer. You can find an example of the object detection application below.\n\n![](/assets/object_detection.png) \n\n## Try the Object Detection Application:\n\nThe [Podman Desktop](https://podman-desktop.io) [AI Lab Extension](https://github.com/containers/podman-desktop-extension-ai-lab) includes this recipe among others. To try it out, open `Recipes Catalog` -> `Object Detection` and follow the instructions to start the application.\n\n# Build the Application\n\nThe rest of this document will explain how to build and run the application from the terminal, and will go into greater detail on how each container in the application above is built, run, and what purpose it serves in the overall application. All the Model Server elements of the recipe use a central Model Server [Makefile](../../../model_servers/common/Makefile.common) that includes variables populated with default values to simplify getting started. 
Currently we do not have a Makefile for the Application elements of the Recipe, but this coming soon, and will leverage the recipes common [Makefile](../../common/Makefile.common) to provide variable configuration and reasonable defaults to this Recipe's application.\n\n* [Download a model](#download-a-model)\n* [Build the Model Service](#build-the-model-service)\n* [Deploy the Model Service](#deploy-the-model-service)\n* [Build the AI Application](#build-the-ai-application)\n* [Deploy the AI Application](#deploy-the-ai-application)\n* [Interact with the AI Application](#interact-with-the-ai-application)\n\n## Download a model\n\nIf you are just getting started, we recommend using [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101).\nThis is a well performant model with an Apache-2.0 license.\nIt's simple to download a copy of the model from [huggingface.co](https://huggingface.co)\n\nYou can use the `download-model-facebook-detr-resnet-101` make target in the `model_servers/object_detection_python` directory to download and move the model into the models directory for you:\n\n```bash\n# from path model_servers/object_detection_python from repo containers/ai-lab-recipes\n make download-model-facebook-detr-resnet-101\n```\n\n## Build the Model Service\n\nThe You can build the Model Service from the [object_detection_python model-service directory](../../../model_servers/object_detection_python).\n\n```bash\n# from path model_servers/object_detection_python from repo containers/ai-lab-recipes\nmake build\n```\n\nCheckout the [Makefile](../../../model_servers/object_detection_python/Makefile) to get more details on different options for how to build.\n\n## Deploy the Model Service\n\nThe local Model Service relies on a volume mount to the localhost to access the model files. It also employs environment variables to dictate the model used and where its served. You can start your local Model Service using the following `make` command from the [`model_servers/object_detection_python`](../../../model_servers/object_detection_python) directory, which will be set with reasonable defaults:\n\n```bash\n# from path model_servers/object_detection_python from repo containers/ai-lab-recipes\nmake run\n```\n\nAs stated above, by default the model service will use [`facebook/detr-resnet-101`](https://huggingface.co/facebook/detr-resnet-101). However you can use other compatabale models. Simply pass the new `MODEL_NAME` and `MODEL_PATH` to the make command. Make sure the model is downloaded and exists in the [models directory](../../../models/):\n\n```bash\n# from path model_servers/object_detection_python from repo containers/ai-lab-recipes\nmake MODEL_NAME=facebook/detr-resnet-50 MODEL_PATH=/models/facebook/detr-resnet-50 run\n```\n\n## Build the AI Application\n\nNow that the Model Service is running we want to build and deploy our AI Application. 
Use the provided Containerfile to build the AI Application\nimage from the [`object_detection/`](./) recipe directory.\n\n```bash\n# from path recipes/computer_vision/object_detection from repo containers/ai-lab-recipes\npodman build -t object_detection_client .\n```\n\n### Deploy the AI Application\n\nMake sure the Model Service is up and running before starting this container image.\nWhen starting the AI Application container image we need to direct it to the correct `MODEL_ENDPOINT`.\nThis could be any appropriately hosted Model Service (running locally or in the cloud) using a compatible API.\nThe following Podman command can be used to run your AI Application:\n\n```bash\npodman run -p 8501:8501 -e MODEL_ENDPOINT=http://10.88.0.1:8000/detection object_detection_client\n```\n\n### Interact with the AI Application\n\nOnce the client is up a running, you should be able to access it at `http://localhost:8501`. From here you can upload images from your local machine and detect objects in the image as shown below. \n\nBy using this recipe and getting this starting point established,\nusers should now have an easier time customizing and building their own AI enabled applications.\n", - "models": [ + "recommended": [ "hf.facebook.detr-resnet-101" ], "backend": "none" diff --git a/packages/frontend/src/lib/table/model/ModelColumnRecipeRecommended.svelte b/packages/frontend/src/lib/table/model/ModelColumnRecipeRecommended.svelte index 647123751..99080967f 100644 --- a/packages/frontend/src/lib/table/model/ModelColumnRecipeRecommended.svelte +++ b/packages/frontend/src/lib/table/model/ModelColumnRecipeRecommended.svelte @@ -1,8 +1,7 @@ -{#if object.recommended} +{#if object} {/if} diff --git a/packages/frontend/src/models/RecipeModelInfo.ts b/packages/frontend/src/models/RecipeModelInfo.ts index b2e017953..8031a3ac7 100644 --- a/packages/frontend/src/models/RecipeModelInfo.ts +++ b/packages/frontend/src/models/RecipeModelInfo.ts @@ -18,6 +18,7 @@ import type { ModelInfo } from '@shared/src/models/IModelInfo'; +// todo :remove export interface RecipeModelInfo extends ModelInfo { recommended: boolean; inUse: boolean; diff --git a/packages/frontend/src/pages/Recipe.svelte b/packages/frontend/src/pages/Recipe.svelte index 19eb6c2e8..7f1e26d3b 100644 --- a/packages/frontend/src/pages/Recipe.svelte +++ b/packages/frontend/src/pages/Recipe.svelte @@ -15,14 +15,19 @@ import ContainerConnectionStatusInfo from '../lib/notification/ContainerConnecti import { modelsInfo } from '../stores/modelsInfo'; import { checkContainerConnectionStatus } from '../utils/connectionUtils'; import { router } from 'tinro'; +import { InferenceType } from '@shared/src/models/IInference'; +import type { ModelInfo } from '@shared/src/models/IModelInfo'; export let recipeId: string; // The recipe model provided $: recipe = $catalog.recipes.find(r => r.id === recipeId); $: categories = $catalog.categories; + +// model selected to start the recipe let selectedModelId: string; -$: selectedModelId = recipe?.models?.[0] ?? ''; +$: selectedModelId = (recipe?.recommended && recipe.recommended.length > 0) ? recipe?.recommended?.[0] : ''; + let connectionInfo: ContainerConnectionInfo | undefined; $: if ($modelsInfo && selectedModelId) { checkContainerConnectionStatus($modelsInfo, selectedModelId, 'recipe') @@ -30,6 +35,9 @@ $: if ($modelsInfo && selectedModelId) { .catch((e: unknown) => console.log(String(e))); } +let models: ModelInfo[]; +$: models = $catalog.models.filter(model => (model.backend ?? 
InferenceType.NONE) === (recipe?.backend ?? InferenceType.NONE)); + // Send recipe info to telemetry let recipeTelemetry: string | undefined = undefined; $: if (recipe && recipe.id !== recipeTelemetry) { @@ -68,8 +76,9 @@ function setSelectedModel(modelId: string) { diff --git a/packages/frontend/src/pages/RecipeModels.svelte b/packages/frontend/src/pages/RecipeModels.svelte index 07edd60c8..2f32d5662 100644 --- a/packages/frontend/src/pages/RecipeModels.svelte +++ b/packages/frontend/src/pages/RecipeModels.svelte @@ -1,46 +1,45 @@ -{#if models} -
-
-
- -
-
+
+
+
+ -{/if} + diff --git a/packages/shared/src/models/IRecipe.ts b/packages/shared/src/models/IRecipe.ts index 15d56df19..cda534550 100644 --- a/packages/shared/src/models/IRecipe.ts +++ b/packages/shared/src/models/IRecipe.ts @@ -26,7 +26,7 @@ export interface Recipe { ref?: string; readme: string; basedir?: string; - models?: string[]; + recommended?: string[]; /** * The backend field aims to target which inference * server the recipe requires From 87ded992ad083aa2c21babb445f5ae7d304f7dde Mon Sep 17 00:00:00 2001 From: axel7083 <42176370+axel7083@users.noreply.github.com> Date: Wed, 12 Jun 2024 17:16:01 +0200 Subject: [PATCH 2/8] fix: recipe models tests Signed-off-by: axel7083 <42176370+axel7083@users.noreply.github.com> --- packages/frontend/src/pages/RecipeModels.spec.ts | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/packages/frontend/src/pages/RecipeModels.spec.ts b/packages/frontend/src/pages/RecipeModels.spec.ts index 923f8cee8..d8f3df517 100644 --- a/packages/frontend/src/pages/RecipeModels.spec.ts +++ b/packages/frontend/src/pages/RecipeModels.spec.ts @@ -37,7 +37,7 @@ beforeEach(() => { { id: 'recipe1', name: 'Recipe 1', - models: ['model1'], + recommended: ['model1'], categories: [], description: 'Recipe 1', readme: '', @@ -63,9 +63,19 @@ beforeEach(() => { test('should display model icon', async () => { render(RecipeModels, { - selectedModelId: 'model1', + recommended: [], + selected: 'model1', setSelectedModel: vi.fn(), - modelsIds: ['model1'], + models: [{ + id: 'model1', + name: 'Model 1', + url: 'https://podman-desktop.io', + registry: 'Podman Desktop', + license: 'Apache 2.0', + description: '', + hw: 'CPU', + memory: 4 * 1024 * 1024 * 1024, + }], }); await waitFor(async () => { From 3a6658fa7be66b2aa52e922ec233bf1724d92527 Mon Sep 17 00:00:00 2001 From: axel7083 <42176370+axel7083@users.noreply.github.com> Date: Wed, 12 Jun 2024 17:19:06 +0200 Subject: [PATCH 3/8] fix: prettier&linter Signed-off-by: axel7083 <42176370+axel7083@users.noreply.github.com> --- packages/frontend/src/pages/Recipe.svelte | 6 +++-- .../frontend/src/pages/RecipeModels.spec.ts | 22 ++++++++++--------- .../frontend/src/pages/RecipeModels.svelte | 17 +++++++------- 3 files changed, 24 insertions(+), 21 deletions(-) diff --git a/packages/frontend/src/pages/Recipe.svelte b/packages/frontend/src/pages/Recipe.svelte index 7f1e26d3b..a9741f622 100644 --- a/packages/frontend/src/pages/Recipe.svelte +++ b/packages/frontend/src/pages/Recipe.svelte @@ -26,7 +26,7 @@ $: categories = $catalog.categories; // model selected to start the recipe let selectedModelId: string; -$: selectedModelId = (recipe?.recommended && recipe.recommended.length > 0) ? recipe?.recommended?.[0] : ''; +$: selectedModelId = recipe?.recommended && recipe.recommended.length > 0 ? recipe?.recommended?.[0] : ''; let connectionInfo: ContainerConnectionInfo | undefined; $: if ($modelsInfo && selectedModelId) { @@ -36,7 +36,9 @@ $: if ($modelsInfo && selectedModelId) { } let models: ModelInfo[]; -$: models = $catalog.models.filter(model => (model.backend ?? InferenceType.NONE) === (recipe?.backend ?? InferenceType.NONE)); +$: models = $catalog.models.filter( + model => (model.backend ?? InferenceType.NONE) === (recipe?.backend ?? 
InferenceType.NONE), +); // Send recipe info to telemetry let recipeTelemetry: string | undefined = undefined; diff --git a/packages/frontend/src/pages/RecipeModels.spec.ts b/packages/frontend/src/pages/RecipeModels.spec.ts index d8f3df517..593576433 100644 --- a/packages/frontend/src/pages/RecipeModels.spec.ts +++ b/packages/frontend/src/pages/RecipeModels.spec.ts @@ -66,16 +66,18 @@ test('should display model icon', async () => { recommended: [], selected: 'model1', setSelectedModel: vi.fn(), - models: [{ - id: 'model1', - name: 'Model 1', - url: 'https://podman-desktop.io', - registry: 'Podman Desktop', - license: 'Apache 2.0', - description: '', - hw: 'CPU', - memory: 4 * 1024 * 1024 * 1024, - }], + models: [ + { + id: 'model1', + name: 'Model 1', + url: 'https://podman-desktop.io', + registry: 'Podman Desktop', + license: 'Apache 2.0', + description: '', + hw: 'CPU', + memory: 4 * 1024 * 1024 * 1024, + }, + ], }); await waitFor(async () => { diff --git a/packages/frontend/src/pages/RecipeModels.svelte b/packages/frontend/src/pages/RecipeModels.svelte index 2f32d5662..bf815dcc1 100644 --- a/packages/frontend/src/pages/RecipeModels.svelte +++ b/packages/frontend/src/pages/RecipeModels.svelte @@ -11,20 +11,19 @@ export let recommended: string[]; export let selected: string; export let setSelectedModel: (modelId: string) => void; -$: models = models - .map((m, i) => { - return { - ...m, - inUse: m.id === selected, - }; - }); +$: models = models.map((m, i) => { + return { + ...m, + inUse: m.id === selected, + }; +}); const columns = [ new TableColumn('', { width: '20px', renderer: ModelColumnRecipeSelection }), new TableColumn('', { width: '20px', renderer: ModelColumnRecipeRecommended, - renderMapping: (object) => recommended.includes(object.id), + renderMapping: object => recommended.includes(object.id), }), new TableColumn('', { width: '32px', renderer: ModelColumnIcon }), new TableColumn('Name', { width: '4fr', renderer: ModelColumnName }), @@ -39,7 +38,7 @@ function setModelToUse(selected: ModelInfo) {
-
+
From 65dacd9c1f2824b143573f5ddeed62f846890020 Mon Sep 17 00:00:00 2001 From: axel7083 <42176370+axel7083@users.noreply.github.com> Date: Wed, 12 Jun 2024 17:33:03 +0200 Subject: [PATCH 4/8] fix: recommended column tests Signed-off-by: axel7083 <42176370+axel7083@users.noreply.github.com> --- .../model/ModelColumnRecipeRecommended.spec.ts | 13 ++----------- 1 file changed, 2 insertions(+), 11 deletions(-) diff --git a/packages/frontend/src/lib/table/model/ModelColumnRecipeRecommended.spec.ts b/packages/frontend/src/lib/table/model/ModelColumnRecipeRecommended.spec.ts index ae34a5a14..c14cf2f64 100644 --- a/packages/frontend/src/lib/table/model/ModelColumnRecipeRecommended.spec.ts +++ b/packages/frontend/src/lib/table/model/ModelColumnRecipeRecommended.spec.ts @@ -19,16 +19,11 @@ import '@testing-library/jest-dom/vitest'; import { test, expect } from 'vitest'; import { screen, render } from '@testing-library/svelte'; -import type { RecipeModelInfo } from '/@/models/RecipeModelInfo'; import ModelColumnRecipeRecommended from './ModelColumnRecipeRecommended.svelte'; test('expect the star icon to be rendered whn recipe is recommended', async () => { render(ModelColumnRecipeRecommended, { - object: { - id: 'id', - inUse: false, - recommended: true, - } as RecipeModelInfo, + object: true, }); const starIcon = screen.getByTitle('Recommended model'); @@ -37,11 +32,7 @@ test('expect the star icon to be rendered whn recipe is recommended', async () = test('expect nothing when recipe is NOT recommended', async () => { render(ModelColumnRecipeRecommended, { - object: { - id: 'id', - inUse: false, - recommended: false, - } as RecipeModelInfo, + object: false, }); const starIcon = screen.queryByTitle('Recommended model'); From 2035239d79ff5f4ea93321c41d8b6361da5f6002 Mon Sep 17 00:00:00 2001 From: axel7083 <42176370+axel7083@users.noreply.github.com> Date: Wed, 12 Jun 2024 17:55:29 +0200 Subject: [PATCH 5/8] fix: typecheck Signed-off-by: axel7083 <42176370+axel7083@users.noreply.github.com> --- packages/frontend/src/lib/RecipeDetails.spec.ts | 2 +- .../frontend/src/lib/table/application/ColumnRecipe.spec.ts | 2 +- packages/frontend/src/pages/Recipe.spec.ts | 4 ++-- packages/frontend/src/pages/Recipes.spec.ts | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/packages/frontend/src/lib/RecipeDetails.spec.ts b/packages/frontend/src/lib/RecipeDetails.spec.ts index 50231e2b9..c667b6eac 100644 --- a/packages/frontend/src/lib/RecipeDetails.spec.ts +++ b/packages/frontend/src/lib/RecipeDetails.spec.ts @@ -109,7 +109,7 @@ const initialCatalog: ApplicationCatalog = { name: 'Recipe 1', readme: 'readme 1', categories: [], - models: ['model1', 'model2'], + recommended: ['model1', 'model2'], description: 'description 1', repository: 'repo 1', }, diff --git a/packages/frontend/src/lib/table/application/ColumnRecipe.spec.ts b/packages/frontend/src/lib/table/application/ColumnRecipe.spec.ts index eff9c8f9c..c0e952ae8 100644 --- a/packages/frontend/src/lib/table/application/ColumnRecipe.spec.ts +++ b/packages/frontend/src/lib/table/application/ColumnRecipe.spec.ts @@ -54,7 +54,7 @@ const initialCatalog: ApplicationCatalog = { name: 'Recipe 1', readme: 'readme 1', categories: [], - models: ['model1', 'model2'], + recommended: ['model1', 'model2'], description: 'description 1', repository: 'repo 1', }, diff --git a/packages/frontend/src/pages/Recipe.spec.ts b/packages/frontend/src/pages/Recipe.spec.ts index 7d73b5fcf..d4b2e2960 100644 --- a/packages/frontend/src/pages/Recipe.spec.ts +++ 
b/packages/frontend/src/pages/Recipe.spec.ts @@ -111,7 +111,7 @@ const initialCatalog: ApplicationCatalog = { name: 'Recipe 1', readme: 'readme 1', categories: [], - models: ['model1', 'model2'], + recommended: ['model1', 'model2'], description: 'description 1', repository: 'repo 1', }, @@ -156,7 +156,7 @@ const updatedCatalog: ApplicationCatalog = { name: 'New Recipe Name', readme: 'readme 1', categories: [], - models: ['model1', 'model2'], + recommended: ['model1', 'model2'], description: 'description 1', repository: 'repo 1', }, diff --git a/packages/frontend/src/pages/Recipes.spec.ts b/packages/frontend/src/pages/Recipes.spec.ts index 7b09e914d..465d95c92 100644 --- a/packages/frontend/src/pages/Recipes.spec.ts +++ b/packages/frontend/src/pages/Recipes.spec.ts @@ -50,7 +50,7 @@ beforeEach(() => { { id: 'recipe1', name: 'Recipe 1', - models: ['model1'], + recommended: ['model1'], categories: [], description: 'Recipe 1', readme: '', @@ -59,7 +59,7 @@ beforeEach(() => { { id: 'recipe2', name: 'Recipe 2', - models: ['model2'], + recommended: ['model2'], categories: ['dummy-category'], description: 'Recipe 2', readme: '', From c3a153227ef0eddccdc6fe7f002d02f621777312 Mon Sep 17 00:00:00 2001 From: axel7083 <42176370+axel7083@users.noreply.github.com> Date: Wed, 12 Jun 2024 18:02:22 +0200 Subject: [PATCH 6/8] tests: remove old recommendation tests Signed-off-by: axel7083 <42176370+axel7083@users.noreply.github.com> --- .../frontend/src/lib/RecipeDetails.spec.ts | 30 ------------------- 1 file changed, 30 deletions(-) diff --git a/packages/frontend/src/lib/RecipeDetails.spec.ts b/packages/frontend/src/lib/RecipeDetails.spec.ts index c667b6eac..a799f2e06 100644 --- a/packages/frontend/src/lib/RecipeDetails.spec.ts +++ b/packages/frontend/src/lib/RecipeDetails.spec.ts @@ -183,36 +183,6 @@ test('swap model panel should be hidden on models tab', async () => { expect(swapModelPanel2.classList.contains('hidden')); }); -test('should display default model information when model is the recommended', async () => { - mocks.getApplicationsStateMock.mockResolvedValue([]); - vi.mocked(catalogStore).catalog = readable(initialCatalog); - render(RecipeDetails, { - recipeId: 'recipe 1', - modelId: 'model1', - }); - - const modelInfo = screen.getByLabelText('model-selected'); - expect(modelInfo.textContent).equal('Model 1'); - const licenseBadge = screen.getByLabelText('license-model'); - expect(licenseBadge.textContent).equal('?'); - const defaultWarning = screen.getByLabelText('model-warning'); - expect(defaultWarning.textContent).contains('This is the default, recommended model for this recipe.'); -}); - -test('should display non-default model information when model is not the recommended one', async () => { - mocks.getApplicationsStateMock.mockResolvedValue([]); - vi.mocked(catalogStore).catalog = readable(initialCatalog); - render(RecipeDetails, { - recipeId: 'recipe 1', - modelId: 'model2', - }); - - const modelInfo = screen.getByLabelText('model-selected'); - expect(modelInfo.textContent).equal('Model 2'); - const defaultWarning = screen.getByLabelText('model-warning'); - expect(defaultWarning.textContent).contains('The default model for this recipe is Model 1'); -}); - test('button vs code should be visible if local repository is not empty', async () => { mocks.getApplicationsStateMock.mockResolvedValue([]); mocks.getLocalRepositoriesMock.mockReturnValue([ From 183ec02b8ce78ee64a77502eafff061950444c0b Mon Sep 17 00:00:00 2001 From: axel7083 <42176370+axel7083@users.noreply.github.com> Date: Thu, 13 Jun 
2024 16:01:11 +0200 Subject: [PATCH 7/8] fix: svelte check Signed-off-by: axel7083 <42176370+axel7083@users.noreply.github.com> --- packages/frontend/src/lib/RecipeDetails.svelte | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/packages/frontend/src/lib/RecipeDetails.svelte b/packages/frontend/src/lib/RecipeDetails.svelte index af5e2004d..4ec91e5ff 100644 --- a/packages/frontend/src/lib/RecipeDetails.svelte +++ b/packages/frontend/src/lib/RecipeDetails.svelte @@ -135,7 +135,7 @@ const deleteLocalClone = () => {
            {model?.name}
-            {#if recipe?.models?.[0] === model.id}
+            {#if recipe?.recommended?.[0] === model.id}
            {/if}
@@ -150,15 +150,15 @@ const deleteLocalClone = () => { {/if}
-            {#if recipe?.models?.[0] === model.id}
+            {#if recipe?.recommended?.[0] === model.id}
              * This is the default, recommended model for this recipe. You can swap for a different compatible model.
            {:else}
-              * The default model for this recipe is {findModel(recipe?.models?.[0])?.name}. You can
-              swap for {findModel(recipe?.models?.[0])?.name} or a different compatible model.
+              * The default model for this recipe is {findModel(recipe?.recommended?.[0])?.name}. You can
+              swap for {findModel(recipe?.recommended?.[0])?.name} or a different compatible model.
            {/if}
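For reference, here is a minimal sketch of the recipe catalog shape this series converges on, based on the `IRecipe.ts` rename and the `ai.json` hunks in patch 1. The field names come from the diff; the concrete example values below are illustrative only and are not copied verbatim from `ai.json`.

```ts
// Sketch only: fields follow packages/shared/src/models/IRecipe.ts after this series.
// `backend` is an InferenceType value in the real code ('llama-cpp', 'whisper-cpp',
// 'none' in ai.json); it is typed as a plain string here to keep the sketch self-contained.
interface Recipe {
  id: string;
  name: string;
  categories: string[];
  description: string;
  repository: string;
  readme: string;
  ref?: string;
  basedir?: string;
  /** ids of the models recommended for this recipe (previously named `models`) */
  recommended?: string[];
  /** which inference server the recipe requires */
  backend?: string;
}

// Hypothetical catalog entry in the shape ai.json now uses; id, categories,
// description and repository are invented for illustration.
const chatbot: Recipe = {
  id: 'chatbot',
  name: 'Chat Application',
  categories: ['natural-language-processing'],
  description: 'LLM-enabled chat application',
  repository: 'https://github.com/containers/ai-lab-recipes',
  readme: '…',
  basedir: 'recipes/natural_language_processing/chatbot',
  recommended: ['hf.instructlab.granite-7b-lab-GGUF', 'hf.instructlab.merlinite-7b-lab-GGUF'],
  backend: 'llama-cpp',
};
```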
From d7cd435958096c3029f170309554c776b5586883 Mon Sep 17 00:00:00 2001 From: axel7083 <42176370+axel7083@users.noreply.github.com> Date: Fri, 14 Jun 2024 11:08:42 +0200 Subject: [PATCH 8/8] fix: multiple models recommendation Signed-off-by: axel7083 <42176370+axel7083@users.noreply.github.com> --- packages/frontend/src/lib/RecipeDetails.svelte | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/packages/frontend/src/lib/RecipeDetails.svelte b/packages/frontend/src/lib/RecipeDetails.svelte index 4ec91e5ff..39d3f9c2a 100644 --- a/packages/frontend/src/lib/RecipeDetails.svelte +++ b/packages/frontend/src/lib/RecipeDetails.svelte @@ -135,7 +135,7 @@ const deleteLocalClone = () => {
            {model?.name}
-            {#if recipe?.recommended?.[0] === model.id}
+            {#if recipe?.recommended?.includes(model.id)}
            {/if}
@@ -150,16 +150,14 @@ const deleteLocalClone = () => { {/if}
-            {#if recipe?.recommended?.[0] === model.id}
-              * This is the default, recommended model for this recipe. You can swap for a different compatible model.
            {:else}
-              * The default model for this recipe is {findModel(recipe?.recommended?.[0])?.name}. You can
-              swap for {findModel(recipe?.recommended?.[0])?.name} or a different compatible model.
+              * This is not a recommended model. You can
+              swap in the models section.
            {/if}
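Taken together, the frontend behaviour after this series reduces to roughly the sketch below, adapted from the `Recipe.svelte`, `RecipeModels.svelte` and `RecipeDetails.svelte` hunks above. The filter expression and the `includes` check are taken from the diff; the helper names and the standalone-function packaging are illustrative, since the real code lives in Svelte reactive statements.

```ts
import { InferenceType } from '@shared/src/models/IInference';
import type { ModelInfo } from '@shared/src/models/IModelInfo';
import type { Recipe } from '@shared/src/models/IRecipe';

// A recipe no longer carries a hard-coded list of usable models: every catalog
// model whose backend matches the recipe's backend is selectable for it.
function modelsForRecipe(catalogModels: ModelInfo[], recipe?: Recipe): ModelInfo[] {
  return catalogModels.filter(
    model => (model.backend ?? InferenceType.NONE) === (recipe?.backend ?? InferenceType.NONE),
  );
}

// `recommended` only drives the star badge and the warning text shown for a model.
function isRecommended(recipe: Recipe | undefined, modelId: string): boolean {
  return recipe?.recommended?.includes(modelId) ?? false;
}

// The first recommended model, if any, is preselected when the recipe page opens.
function defaultModelId(recipe?: Recipe): string {
  return recipe?.recommended && recipe.recommended.length > 0 ? recipe.recommended[0] : '';
}
```

With this split, adding a model whose `backend` matches a recipe makes it selectable for that recipe without touching the recipe entry, while the shortened `recommended` lists only control which rows get the star badge and which model is preselected by default.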