[Example] ggml: move belle-chat and codellama to llama-utils
Signed-off-by: dm4 <[email protected]>
dm4 committed Dec 22, 2023
1 parent 12c4aa3 commit f0f4910
Showing 8 changed files with 2 additions and 658 deletions.
11 changes: 0 additions & 11 deletions wasmedge-ggml-belle-chat/Cargo.toml

This file was deleted.

153 changes: 1 addition & 152 deletions wasmedge-ggml-belle-chat/README.md
@@ -1,152 +1 @@
# Chat with `BELLE-Llama2-13B-GGUF` using WASI-NN with GGML Backend

## Requirements

### For macOS (apple silicon)

Install WasmEdge 0.13.4 with the WASI-NN ggml plugin (Metal enabled on Apple Silicon) via the installer:

```bash
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugin wasi_nn-ggml
# After installing WasmEdge, you need to activate the environment.
# Assuming you use zsh (the default shell on macOS), run the following command:
source $HOME/.zshenv
```

### For Ubuntu (>= 20.04)

Because OpenBLAS is enabled on Ubuntu, you must install `libopenblas-dev` via `apt update && apt install -y libopenblas-dev`.

Install WasmEdge 0.13.4 with the WASI-NN ggml plugin (OpenBLAS enabled) via the installer:

```bash
apt update && apt install -y libopenblas-dev # You may need sudo if the user is not root.
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugin wasi_nn-ggml
# After installing WasmEdge, you need to activate the environment.
# Assuming you use bash (the default shell on Ubuntu), run the following command:
source $HOME/.bashrc
```

### For General Linux

Install WasmEdge 0.13.4 with the WASI-NN ggml plugin via the installer:

```bash
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugin wasi_nn-ggml
# After installing WasmEdge, you need to activate the environment.
# Assuming you use bash, run the following command:
source $HOME/.bashrc
```

## Prepare WASM application

### (Recommended) Use the pre-built one bundled in this repo

A pre-built wasm of this example is bundled in this folder; see `wasmedge-ggml-belle-chat.wasm`.

### (Optional) Build from source

If you want to make modifications, you can build from source.

Compile the application to WebAssembly:

```bash
cargo build --target wasm32-wasi --release
```

The output WASM file will be at `target/wasm32-wasi/release/`.

```bash
cp target/wasm32-wasi/release/wasmedge-ggml-belle-chat.wasm ./wasmedge-ggml-belle-chat.wasm
```

## Get Model

In this example, we are going to use `BELLE-Llama2-13B-chat-0.4M-q4_0.gguf`.

Download the model:

```bash
curl -LO https://huggingface.co/apepkuss79/BELLE-Llama2-13B-Chat-GGUF/resolve/main/BELLE-Llama2-13B-chat-0.4M-q4_0.gguf
```

## Execute

Execute the WASM with `wasmedge`, using the named model feature to preload the large model:

```bash
wasmedge --dir .:. \
--nn-preload default:GGML:CPU:BELLE-Llama2-13B-chat-0.4M-q4_0.gguf \
wasmedge-ggml-belle-chat.wasm --model-alias default
```

After executing the command, you may need to wait a moment for the input prompt to appear.
You can enter your question once you see the `[USER]:` prompt:

```console
[USER]:
What's the capital of France?
[ASSISTANT]:
The capital of France is Paris.
[USER]:
挪威的首都是哪里? (Where is the capital of Norway?)
[ASSISTANT]:
挪威的首都是奥斯陆。 (The capital of Norway is Oslo.)
[USER]:
如果一个苹果5元钱,我有两个苹果,一共花了多少钱? (If one apple costs 5 yuan and I have two apples, how much did I spend in total?)
[ASSISTANT]:
如果一个苹果元钱,两个苹果,一共花了2元钱。 (If one apple costs yuan, two apples cost 2 yuan in total.)
[USER]:
如果我有三个苹果要花多少钱? (If I have three apples, how much will that cost?)
[ASSISTANT]:
如果一个苹果元钱,三个苹果,一共花了3元钱。 (If one apple costs yuan, three apples cost 3 yuan in total.)
```
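
For reference, the guest side of this flow looks roughly like the sketch below. This is a minimal illustration rather than the removed `src/main.rs`: it assumes the `wasmedge-wasi-nn` Rust crate, the output buffer size is arbitrary, and the real chat example also carries the running conversation history in the prompt. The `"default"` alias passed to `build_from_cache` must match the alias given to `--nn-preload` and `--model-alias`.

```rust
// Minimal sketch of a WASI-NN ggml chat loop (illustrative, not the deleted main.rs).
use std::io;
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    // Look up the model preloaded by the host under the alias "default".
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::CPU)
        .build_from_cache("default")
        .expect("failed to load the preloaded model");
    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create an execution context");

    loop {
        println!("[USER]:");
        let mut line = String::new();
        if io::stdin().read_line(&mut line).expect("failed to read stdin") == 0 {
            break; // EOF: end the chat session.
        }

        // The prompt bytes go to the backend as a single U8 tensor.
        let prompt = line.trim().as_bytes().to_vec();
        ctx.set_input(0, TensorType::U8, &[1], &prompt)
            .expect("failed to set the input tensor");
        ctx.compute().expect("inference failed");

        // Copy the generated text out of output tensor 0.
        let mut buf = vec![0u8; 4096]; // illustrative buffer size
        let n = ctx.get_output(0, &mut buf).expect("failed to read the output");
        println!("[ASSISTANT]:\n{}", String::from_utf8_lossy(&buf[..n]));
    }
}
```

Preloading by alias this way lets the host load the multi-gigabyte GGUF file once, while the guest only refers to the model by name.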

## Errors

- After running `apt update && apt install -y libopenblas-dev`, you may encounter the following error:

```console
...
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
```

This indicates that you are not logged in as `root`. Please try installing again using the `sudo` command:

```bash
sudo apt update && sudo apt install -y libopenblas-dev
```

- After running the `wasmedge` command, you may receive the following error:

```console
[2023-10-02 14:30:31.227] [error] loading failed: invalid path, Code: 0x20
[2023-10-02 14:30:31.227] [error] load library failed:libblas.so.3: cannot open shared object file: No such file or directory
[2023-10-02 14:30:31.227] [error] loading failed: invalid path, Code: 0x20
[2023-10-02 14:30:31.227] [error] load library failed:libblas.so.3: cannot open shared object file: No such file or directory
unknown option: nn-preload
```

This suggests that the plugin installation was not successful. To resolve this issue, please try installing the plugin again.

## Parameters

Currently, we support the following parameters:

- `LLAMA_LOG`: Set it to a non-empty value to enable logging.
- `LLAMA_N_CTX`: Set the context size, the same as the `--ctx-size` parameter in llama.cpp (default: 512).
- `LLAMA_N_PREDICT`: Set the number of tokens to predict, the same as the `--n-predict` parameter in llama.cpp (default: 512).

These parameters can be set by adding the following environment variables before the `wasmedge` command:

```bash
LLAMA_LOG=1 LLAMA_N_CTX=2048 LLAMA_N_PREDICT=512 \
wasmedge --dir .:. \
--nn-preload default:GGML:CPU:BELLE-Llama2-13B-chat-0.4M-q4_0.gguf \
wasmedge-ggml-belle-chat.wasm --model-alias default
```

## Credit

The WASI-NN ggml plugin embeds [`llama.cpp`](git://github.com/ggerganov/llama.cpp.git@b1217) as its backend.
The `wasmedge-ggml-belle-chat` example is no longer maintained. Please refer to [llama-utils](https://github.com/second-state/llama-utils) for the latest updates.
155 changes: 0 additions & 155 deletions wasmedge-ggml-belle-chat/src/main.rs

This file was deleted.

Binary file not shown.
11 changes: 0 additions & 11 deletions wasmedge-ggml-codellama/Cargo.toml

This file was deleted.
