Commit
Merge pull request #231 from tikikun/main
chore: version pump
Showing 1 changed file with 1 addition and 1 deletion.
Submodule llama.cpp updated: 19 files

| Changes | File |
| --- | --- |
| +5 −0 | CMakeLists.txt |
| +4 −0 | Makefile |
| +130 −1 | convert-hf-to-gguf.py |
| +1 −1 | examples/batched-bench/batched-bench.cpp |
| +3 −7 | examples/batched.swift/Sources/main.swift |
| +16 −8 | examples/llama.swiftui/llama.cpp.swift/LibLlama.swift |
| +1 −0 | examples/server/api_like_OAI.py |
| +2 −4 | examples/server/server.cpp |
| +1 −1 | ggml-alloc.c |
| +87 −43 | ggml-cuda.cu |
| +28 −17 | ggml-metal.m |
| +93 −93 | ggml-metal.metal |
| +65 −36 | ggml.c |
| +8 −0 | ggml.h |
| +20 −0 | gguf-py/gguf/constants.py |
| +10 −8 | gguf-py/gguf/tensor_mapping.py |
| +279 −30 | llama.cpp |
| +1 −0 | prompts/chat-with-qwen.txt |
| +3 −0 | requirements-hf-to-gguf.txt |