[CI] ggml: add llama2 7b chat examples #199

Workflow file for this run

name: ggml llama2 examples
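# Runs nightly, on manual dispatch, and on pushes and pull requests that
# touch this workflow or the example crate.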
on:
  schedule:
    - cron: "0 0 * * *"
  workflow_dispatch:
    inputs:
      logLevel:
        description: 'Log level'
        required: true
        default: 'info'
  push:
    branches: [ '*' ]
    paths:
      - ".github/workflows/llama.yml"
      - "wasmedge-ggml-llama-interactive/**"
  pull_request:
    branches: [ '*' ]
    paths:
      - ".github/workflows/llama.yml"
      - "wasmedge-ggml-llama-interactive/**"
jobs:
  ubuntu:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v4
      - name: Install apt-get packages
        run: |
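          # Work around the grub-pc debconf prompt that can otherwise stall
          # unattended package upgrades on Ubuntu runner images.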
          echo RESET grub-efi/install_devices | sudo debconf-communicate grub-pc
          sudo ACCEPT_EULA=Y apt-get update
          sudo ACCEPT_EULA=Y apt-get upgrade -y
          sudo apt-get install -y wget git curl software-properties-common build-essential libopenblas-dev
      - name: Install Rust target for wasm
        run: |
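          # The example crate is compiled to WebAssembly with WASI support.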
          rustup target add wasm32-wasi
      - name: Install WasmEdge + WASI-NN + GGML
        run: |
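          # -v pins the WasmEdge release, --plugins wasi_nn-ggml installs the
          # WASI-NN plugin with the GGML backend, and -p sets the install prefix.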
          VERSION=0.13.5
          curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | sudo bash -s -- -v $VERSION --plugins wasi_nn-ggml -p /usr/local
      - name: Tiny Llama
        run: |
          cd wasmedge-ggml-llama-interactive
          curl -LO https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/resolve/main/tinyllama-1.1b-chat-v0.3.Q5_K_M.gguf
          cargo build --target wasm32-wasi --release
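          # --nn-preload registers the model as name:backend:device:path (AUTO
          # lets WasmEdge pick the device). ctx_size is the context window in
          # tokens, stream_stdout prints tokens as they are generated, and
          # n_gpu_layers=0 keeps inference on the CPU. TinyLlama-Chat uses the
          # ChatML prompt format (<|im_start|> / <|im_end|>).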
          wasmedge --dir .:. \
            --nn-preload default:GGML:AUTO:tinyllama-1.1b-chat-v0.3.Q5_K_M.gguf \
            --env enable_log=true \
            --env ctx_size=4096 \
            --env stream_stdout=true \
            --env n_gpu_layers=0 \
            target/wasm32-wasi/release/wasmedge-ggml-llama-interactive.wasm \
            default \
            '<|im_start|>system\nYou are an AI assistant<|im_end|>\n<|im_start|>user\nWhere is the capital of Japan?<|im_end|>\n<|im_start|>assistant'
      - name: llama2 7b
        run: |
          cd wasmedge-ggml-llama-interactive
          curl -LO https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_M.gguf
          cargo build --target wasm32-wasi --release
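          # Same invocation as the Tiny Llama step, but Llama-2-Chat expects the
          # [INST] <<SYS>> ... <</SYS>> prompt template rather than ChatML.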
          wasmedge --dir .:. \
            --nn-preload default:GGML:AUTO:llama-2-7b-chat.Q5_K_M.gguf \
            --env enable_log=true \
            --env ctx_size=4096 \
            --env stream_stdout=true \
            --env n_gpu_layers=0 \
            target/wasm32-wasi/release/wasmedge-ggml-llama-interactive.wasm \
            default \
            '[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you do not know the answer to a question, please do not share false information.\n<</SYS>>\nWhat is the capital of Japan?[/INST]'
  macos:
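    # Run the same examples on an Intel (macos-13) and an Apple Silicon (macos-14) runner.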
    strategy:
      matrix:
        include:
          - name: MacOS-13
            host_runner: macos-13
          - name: MacOS-14
            host_runner: macos-14
    name: ${{ matrix.name }}
    runs-on: ${{ matrix.host_runner }}
    steps:
      - uses: actions/checkout@v4
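      # Unlike the Ubuntu job, install the Rust toolchain explicitly here.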
      - uses: actions-rust-lang/setup-rust-toolchain@v1
      - name: Install Rust target for wasm
        run: |
          rustup target add wasm32-wasi
      - name: Install WasmEdge + WASI-NN + GGML
        run: |
          VERSION=0.13.5
          curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | sudo bash -s -- -v $VERSION --plugins wasi_nn-ggml -p /usr/local
      - name: Tiny Llama
        run: |
          cd wasmedge-ggml-llama-interactive
          curl -LO https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/resolve/main/tinyllama-1.1b-chat-v0.3.Q5_K_M.gguf
          cargo build --target wasm32-wasi --release
          wasmedge --dir .:. \
            --nn-preload default:GGML:AUTO:tinyllama-1.1b-chat-v0.3.Q5_K_M.gguf \
            --env enable_log=true \
            --env ctx_size=4096 \
            --env stream_stdout=true \
            --env n_gpu_layers=0 \
            target/wasm32-wasi/release/wasmedge-ggml-llama-interactive.wasm \
            default \
            '<|im_start|>system\nYou are an AI assistant<|im_end|>\n<|im_start|>user\nWhere is the capital of Japan?<|im_end|>\n<|im_start|>assistant'
      - name: llama2 7b
        run: |
          cd wasmedge-ggml-llama-interactive
          curl -LO https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_M.gguf
          cargo build --target wasm32-wasi --release
          wasmedge --dir .:. \
            --nn-preload default:GGML:AUTO:llama-2-7b-chat.Q5_K_M.gguf \
            --env enable_log=true \
            --env ctx_size=4096 \
            --env stream_stdout=true \
            --env n_gpu_layers=0 \
            target/wasm32-wasi/release/wasmedge-ggml-llama-interactive.wasm \
            default \
            '[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you do not know the answer to a question, please do not share false information.\n<</SYS>>\nWhat is the capital of Japan?[/INST]'