A finetuner for LLMs on Intel XPU devices, with which you can finetune the OpenLLaMA-3B model to sound like your favorite book.
conda env create -f env.yml
conda activate pyt_llm_xpu
Warning: Once PyTorch and Intel Extension for PyTorch are already set up, install peft without its dependencies, since peft requires PyTorch 2.0 (not yet supported on Intel XPU devices).
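One way to skip peft's dependency resolution (so it does not pull in PyTorch 2.0) is pip's `--no-deps` flag:

```shell
# peft declares PyTorch 2.0 as a dependency, which Intel XPU devices do
# not yet support, so install peft without resolving its dependencies.
pip install peft --no-deps
```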
Fetch a book from Project Gutenberg (default: Pride and Prejudice) and generate the dataset.
python fetch_data.py
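The dataset step boils down to splitting the raw book text into fixed-size chunks and serializing them as JSON. A minimal sketch of that idea follows; the chunking scheme and the `{"text": ...}` sample format are assumptions for illustration, not the actual output format of `fetch_data.py`:

```python
import json

def build_dataset(text, chunk_words=128):
    # Split the raw book text into fixed-size word chunks; each chunk
    # becomes one training sample (the real field names may differ).
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    return [{"text": c} for c in chunks]

# 300 words of stand-in text -> ceil(300 / 128) = 3 samples
samples = build_dataset("It is a truth universally acknowledged " * 50)
print(json.dumps(samples[0])[:40])
```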
python finetune.py --input_data ./book_data.json --batch_size=64 --micro_batch_size=16 --num_steps=300
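The two batch flags typically relate through gradient accumulation: the effective batch of `--batch_size` samples is built up from several smaller forward/backward passes of `--micro_batch_size` each. A sketch of the arithmetic (assuming this is how the script uses the flags):

```python
# With the values from the command above, each optimizer step
# accumulates gradients over batch_size // micro_batch_size passes.
batch_size = 64
micro_batch_size = 16
gradient_accumulation_steps = batch_size // micro_batch_size
print(gradient_accumulation_steps)  # 4
```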
For inference, you can either provide an input prompt, or the model will fall back to a default prompt.
python inference.py --infer
python inference.py --infer --prompt "my prompt"
python inference.py --bench
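The `--bench` mode presumably times generation rather than printing completions. A minimal sketch of such a benchmark loop, averaging wall-clock latency over a few runs (`benchmark` and its metric are illustrative assumptions, not `inference.py`'s actual implementation):

```python
import time

def benchmark(generate_fn, n_runs=3):
    # Run the generation callable n_runs times and return the
    # average latency in seconds (hypothetical helper).
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        generate_fn()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# Stand-in workload in place of a real model.generate() call.
avg_latency = benchmark(lambda: sum(range(100_000)))
```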