Add Modelscope option for chatglm3 on GPU #12545

Merged (6 commits), Dec 16, 2024
Changes from 4 commits
12 changes: 10 additions & 2 deletions python/llm/example/GPU/HuggingFace/LLM/chatglm3/README.md
@@ -1,6 +1,6 @@
# ChatGLM3

In this directory, you will find examples of how you can apply IPEX-LLM INT4 optimizations on ChatGLM3 models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize the [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) as a reference ChatGLM3 model.
In this directory, you will find examples of how you can apply IPEX-LLM INT4 optimizations on ChatGLM3 models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize the [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) (or [ZhipuAI/chatglm3-6b](https://www.modelscope.cn/models/ZhipuAI/chatglm3-6b) for ModelScope) as a reference ChatGLM3 model.

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.
@@ -13,6 +13,9 @@ conda create -n llm python=3.11
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# [optional] only needed if you would like to use ModelScope as model hub
pip install modelscope==1.11.0
```

### 1.2 Installation on Windows
@@ -23,6 +26,9 @@ conda activate llm

# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# [optional] only needed if you would like to use ModelScope as model hub
pip install modelscope==1.11.0
```

## 2. Configures OneAPI environment variables for Linux
@@ -98,9 +104,10 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/chatglm3-6b'`.
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/chatglm3-6b'` for **Hugging Face** or `'ZhipuAI/chatglm3-6b'` for **ModelScope**.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
- `--modelscope`: use **ModelScope** instead of **Hugging Face** as the model hub.

#### Sample Output
#### [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b)
@@ -146,3 +153,4 @@ Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/chatglm3-6b'`.
- `--question QUESTION`: argument defining the question to ask. It defaults to `"晚上睡不着应该怎么办"`.
- `--disable-stream`: argument defining whether to disable stream chat. If `--disable-stream` is included when running the script, stream chat is disabled and the `chat()` API is used instead.
- `--modelscope`: use **ModelScope** instead of **Hugging Face** as the model hub.
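The hub-dependent default described above can be summarized in a small helper; `pick_default_path` is a hypothetical name used here only for illustration:

```python
def pick_default_path(repo_id_or_model_path, use_modelscope):
    """Resolve the model path the way the updated scripts do.

    An explicitly given repo id or local path always wins; otherwise the
    default depends on whether ModelScope or Hugging Face is the hub.
    """
    if repo_id_or_model_path:
        return repo_id_or_model_path
    return "ZhipuAI/chatglm3-6b" if use_modelscope else "THUDM/chatglm3-6b"

print(pick_default_path(None, False))  # THUDM/chatglm3-6b
print(pick_default_path(None, True))   # ZhipuAI/chatglm3-6b
```

A user-supplied `--repo-id-or-model-path` value is returned unchanged, so local checkpoints behave the same with or without `--modelscope`.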
19 changes: 15 additions & 4 deletions python/llm/example/GPU/HuggingFace/LLM/chatglm3/generate.py
@@ -20,24 +20,34 @@
import numpy as np

from ipex_llm.transformers import AutoModel
from transformers import AutoTokenizer

# you could tune the prompt based on your own model,
# here the prompt tuning refers to https://github.com/THUDM/ChatGLM3/blob/main/PROMPT.md
CHATGLM_V3_PROMPT_FORMAT = "<|user|>\n{prompt}\n<|assistant|>"

if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for ChatGLM3 model')
parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm3-6b",
parser.add_argument('--repo-id-or-model-path', type=str,
help='The huggingface repo id for the ChatGLM3 model to be downloaded'
', or the path to the huggingface checkpoint folder')
parser.add_argument('--prompt', type=str, default="AI是什么?",
help='Prompt to infer')
parser.add_argument('--n-predict', type=int, default=32,
help='Max tokens to predict')
parser.add_argument('--modelscope', action="store_true", default=False,
help="Use models from modelscope")

args = parser.parse_args()
model_path = args.repo_id_or_model_path

if args.modelscope:
    from modelscope import AutoTokenizer
    model_hub = 'modelscope'
else:
    from transformers import AutoTokenizer
    model_hub = 'huggingface'

model_path = args.repo_id_or_model_path if args.repo_id_or_model_path else \
    ("ZhipuAI/chatglm3-6b" if args.modelscope else "THUDM/chatglm3-6b")

# Load model in 4 bit,
# which convert the relevant layers in the model into INT4 format
@@ -47,7 +57,8 @@
load_in_4bit=True,
optimize_model=True,
trust_remote_code=True,
use_cache=True)
use_cache=True,
model_hub=model_hub)
model = model.half().to('xpu')

# Load tokenizer
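The `--modelscope` branch in `generate.py` reduces to choosing an import source for `AutoTokenizer` and a `model_hub` tag for `from_pretrained`. A minimal sketch of that selection logic (`resolve_hub` is a hypothetical name; the selection itself runs without either package installed):

```python
def resolve_hub(use_modelscope):
    """Map the --modelscope flag to (tokenizer package, model_hub tag).

    The real script then imports AutoTokenizer from the chosen package
    and passes model_hub through to AutoModel.from_pretrained.
    """
    if use_modelscope:
        return "modelscope", "modelscope"
    return "transformers", "huggingface"

package, model_hub = resolve_hub(use_modelscope=False)
print(package, model_hub)  # transformers huggingface
```

Deferring the import until after argument parsing means `modelscope` only needs to be installed when the flag is actually used.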
22 changes: 17 additions & 5 deletions python/llm/example/GPU/HuggingFace/LLM/chatglm3/streamchat.py
@@ -20,21 +20,32 @@
import numpy as np

from ipex_llm.transformers import AutoModel
from transformers import AutoTokenizer


if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Stream Chat for ChatGLM3 model')
parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm3-6b",
parser.add_argument('--repo-id-or-model-path', type=str,
help='The huggingface repo id for the ChatGLM3 model to be downloaded'
', or the path to the huggingface checkpoint folder')
parser.add_argument('--question', type=str, default="晚上睡不着应该怎么办",
help='Question you want to ask')
parser.add_argument('--disable-stream', action="store_true",
help='Disable stream chat')
parser.add_argument('--modelscope', action="store_true", default=False,
help="Use models from modelscope")

args = parser.parse_args()
model_path = args.repo_id_or_model_path

if args.modelscope:
    from modelscope import AutoTokenizer
    model_hub = 'modelscope'
else:
    from transformers import AutoTokenizer
    model_hub = 'huggingface'

model_path = args.repo_id_or_model_path if args.repo_id_or_model_path else \
    ("ZhipuAI/chatglm3-6b" if args.modelscope else "THUDM/chatglm3-6b")

disable_stream = args.disable_stream

# Load model in 4 bit,
@@ -44,8 +55,9 @@
model = AutoModel.from_pretrained(model_path,
load_in_4bit=True,
trust_remote_code=True,
optimize_model=True)
model.to('xpu')
optimize_model=True,
model_hub=model_hub)
model.half().to('xpu')

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path,
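In `streamchat.py`, `--disable-stream` switches between the model's one-shot `chat()` call and its incremental `stream_chat()` generator. A mock sketch of that dispatch, with stand-in callables in place of the real model methods:

```python
def run_chat(chat_fn, stream_chat_fn, question, disable_stream):
    """Dispatch between one-shot and streaming chat.

    chat_fn returns the full reply at once; stream_chat_fn yields the
    reply-so-far on each step. Both are stand-ins for the model APIs.
    """
    if disable_stream:
        return chat_fn(question)
    reply = ""
    for partial in stream_chat_fn(question):
        reply = partial  # each yield carries the full reply so far
    return reply

# Stand-in implementations for illustration only.
full = lambda q: "an answer"
streamed = lambda q: iter(["an", "an ans", "an answer"])
print(run_chat(full, streamed, "hi", disable_stream=False))  # an answer
```

Either path ends with the same final reply; streaming only changes how incrementally it is surfaced to the user.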