
basic implementation of llama.cpp chat generation #193

Triggered via pull request (synchronize) by @lbuxlbux on May 12, 2024 05:43
Pull request: #723, branch: main
Status: Cancelled
Total duration: 49s
Artifacts: none

Workflow file: llama_cpp.yml
on: pull_request
Matrix: run

Annotations: 14 errors
Python 3.10 on Windows: Canceling since a higher priority waiting request for 'llama_cpp-main' exists
Python 3.10 on Windows: The operation was canceled.
Python 3.9 on Windows: Canceling since a higher priority waiting request for 'llama_cpp-main' exists
Python 3.9 on Windows: The operation was canceled.
Python 3.10 on Linux: Canceling since a higher priority waiting request for 'llama_cpp-main' exists
Python 3.10 on Linux: The operation was canceled.
Python 3.9 on macOS: Canceling since a higher priority waiting request for 'llama_cpp-main' exists
Python 3.9 on macOS: The operation was canceled.
Python 3.9 on macOS: [notice] A new release of pip is available: 22.0.4 -> 24.0. To update, run: python3.9 -m pip install --upgrade pip
Python 3.9 on Linux: Canceling since a higher priority waiting request for 'llama_cpp-main' exists
Python 3.9 on Linux: The operation was canceled.
Python 3.10 on macOS: Canceling since a higher priority waiting request for 'llama_cpp-main' exists
Python 3.10 on macOS: [notice] A new release of pip is available: 23.0.1 -> 24.0. To update, run: python3.10 -m pip install --upgrade pip
Python 3.10 on macOS: The operation was canceled.
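
The repeated "Canceling since a higher priority waiting request for 'llama_cpp-main' exists" annotations are the standard GitHub Actions message emitted when a workflow's concurrency group cancels an in-progress run in favor of a newer one, which is consistent with the Cancelled status and the 49s duration above. Below is a minimal, hypothetical sketch of a workflow that would behave this way, assuming a concurrency group named llama_cpp-main and a Python 3.9/3.10 matrix across Linux, Windows, and macOS; the actual llama_cpp.yml in the repository may differ.

```yaml
# Hypothetical sketch, not the repository's actual llama_cpp.yml.
name: llama_cpp

on: pull_request

concurrency:
  group: llama_cpp-main      # matches the group named in the cancellation annotations
  cancel-in-progress: true   # a newer run in this group cancels the current one

jobs:
  run:
    name: Python ${{ matrix.python-version }} on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        python-version: ["3.9", "3.10"]
        os: [ubuntu-latest, windows-latest, macos-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      # Upgrading pip here would also silence the "[notice] A new release of pip
      # is available" annotations seen on the macOS jobs.
      - name: Install dependencies
        run: python -m pip install --upgrade pip
```

With cancel-in-progress set to true, pushing another commit to PR #723 would queue a fresh run of the same group and cancel this one mid-flight, which matches the pattern of "Canceling..." followed by "The operation was canceled." across all matrix jobs.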