Merge pull request #342 from janhq/update-submodule-2024-12-18-17-01
Update llama.cpp submodule to latest release b4354
Showing 1 changed file with 1 addition and 1 deletion.
Submodule llama.cpp updated (20 files):
+1 −1  README.md
+0 −6  convert_hf_to_gguf.py
+0 −1  convert_hf_to_gguf_update.py
+6 −2  examples/server/README.md
+54 −20  examples/server/server.cpp
+12 −4  examples/server/tests/unit/test_completion.py
+59 −0  examples/server/tests/unit/test_embedding.py
+23 −0  examples/server/tests/unit/test_rerank.py
+13 −6  examples/server/utils.hpp
+0 −1  ggml/src/ggml-alloc.c
+1 −1  ggml/src/ggml-cpu/CMakeLists.txt
+3 −0  ggml/src/ggml-cpu/ggml-cpu.cpp
+16 −0  ggml/src/ggml-impl.h
+1 −1  ggml/src/ggml-vulkan/vulkan-shaders/soft_max.comp
+26 −43  ggml/src/ggml.c
+1 −1  scripts/sync-ggml.last
+8 −26  src/llama.cpp
+1 −0  tests/CMakeLists.txt
+2 −2  tests/test-backend-ops.cpp
+1,303 −0  tests/test-gguf.cpp