From 1452a5d40dad7e5059fef1bdf00679c301cdb8f3 Mon Sep 17 00:00:00 2001
From: Michael Clifford
Date: Wed, 1 May 2024 14:12:28 -0400
Subject: [PATCH] small text updates to converter readme

Signed-off-by: Michael Clifford
---
 convert_models/README.md | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/convert_models/README.md b/convert_models/README.md
index 2d6bd94b3..1043dbaa5 100644
--- a/convert_models/README.md
+++ b/convert_models/README.md
@@ -1,12 +1,12 @@
 # Convert and Quantize Models
 
-Locallm currently relies on [llamacpp](https://github.com/ggerganov/llama.cpp) for its model service backend. Llamacpp requires that model be in a `*.gguf` format.
+AI Lab Recipes' default model server is [llamacpp_python](https://github.com/abetlen/llama-cpp-python), which requires models to be in `*.GGUF` format.
 
-However, most models available on [huggingface](https://huggingface.co/models) are not provided directly as `*.gguf` files. More often they are provided as a set of `*.bin` files with some additional metadata files that are produced when the model is originally trained.
+However, most models available on [huggingface](https://huggingface.co/models) are not provided directly as `*.GGUF` files. More often they are provided as a set of `*.bin` or `*.safetensors` files with some additional metadata produced when the model is trained.
 
-There are of course a number of users on huggingface who provide `*gguf` versions of popular models. But this introduces an unnecessary interim dependency as well as possible security or licensing concerns.
+There are, of course, a number of users on huggingface who provide `*.GGUF` versions of popular models, but relying on them introduces an unnecessary intermediate dependency as well as possible security or licensing concerns.
 
-To avoid these concerns and provide users with the maximum freedom of choice for their models, we provide a tool to quickly and easily convert and quantize a model on huggingface into a `*gguf` format for use with Locallm.
+To avoid these concerns and give users the maximum freedom of choice for their models, we provide a tool to quickly and easily convert and quantize a model from huggingface into `*.GGUF` format for use with our `*.GGUF`-compatible model servers.
 
 ![](/assets/model_converter.png)
 
@@ -19,7 +19,7 @@ podman build -t converter .
 ## Quantize and Convert
 
-You can run the conversion image directly with Podman in the terminal. You just need to provide it with the huggingface model you want to download, the quantization level you want to use and whether or not you want to keep the raw files after conversion.
+You can run the conversion image directly with podman in the terminal. You just need to provide it with the name of the huggingface model you want to download, the quantization level you want to use, and whether or not you want to keep the raw files after conversion.
 
 ```bash
 podman run -it --rm -v models:/converter/converted_models -e HF_MODEL_URL=<MODEL_URL> -e QUANTIZATION=Q4_K_M -e KEEP_ORIGINAL_MODEL="False" converter
 ```
@@ -33,12 +33,12 @@ streamlit run convert_models/ui.py
 
 ## Model Storage and Use
 
-This process writes the models into a Podman volume under a `gguf/` directory and not directly back to the user's host machine (This could be changed in an upcoming update if it is required).
+This process writes the models into a podman volume under a `gguf/` directory, not directly back to the user's host machine (this may change in a future update if required).
 
-If a user wants to access these models to use with the llamacpp model-service, they would simply point their model-service volume mount to the Podman volume created here. For example:
+If a user wants to access these models for use with the llamacpp_python model server, they would simply point their model server to the correct podman volume at run time. For example:
 
-```
-podman run -it -p 8001:8001 -v models:/locallm/models:Z -e MODEL_PATH=models/gguf/ -e HOST=0.0.0.0 -e PORT=8001 llamacppserver
+```bash
+podman run -it -p 8001:8001 -v models:/opt/app-root/src/converter/converted_models/gguf:Z -e MODEL_PATH=/gguf/ -e HOST=0.0.0.0 -e PORT=8001 llamacpp_python
 ```
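
Applying this patch and following the updated README should leave the converted model inside the `models` volume. As a quick, optional check that is not part of the patch, something like the following sketch can list the output; it assumes the volume name `models` from the README and uses `busybox` merely as a throwaway image for reading the volume:

```bash
# List the converted model files written by the converter into the
# "models" podman volume (the converter stores them under gguf/).
podman run --rm -v models:/models:Z busybox ls -l /models/gguf/
```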
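Similarly, once the llamacpp_python server from the last hunk is running, a minimal smoke test might look like this; the `/v1/models` route is assumed from llamacpp_python's OpenAI-compatible API and is not mentioned in the patch itself:

```bash
# With the server from the README listening on port 8001, this should
# return a JSON object listing the served model.
curl http://localhost:8001/v1/models
```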