- Make llama.cpp Chat Generator compatible with new `ChatMessage` (#1254)
- Do not retry tests in `hatch run test` command (#954)
- Adopt uv as installer (#1142)
- Update ruff linting scripts and settings (#1105)
- Unpin `llama-cpp-python` (#1115)
- Fix linting/isort (#1215)
- Use `text` instead of `content` for `ChatMessage` in Llama.cpp, Langfuse and Mistral (#1238)
- Chore: llama_cpp - ruff update, don't ruff tests (#998)
- Fix: pin `llama-cpp-python<0.3.0` (#1111)
- Replace DynamicChatPromptBuilder with ChatPromptBuilder (#940)
- Retry tests to reduce flakiness (#836)
- Update ruff invocation to include check parameter (#853)
- Pin `llama-cpp-python>=0.2.87` (#955)
- CI: install `pytest-rerunfailures` where needed; add retry config to `test-cov` script (#845)
- Fix: pin llama-cpp-python to an older version (#943)
- Refactor: introduce `_convert_message_to_llamacpp_format` utility function (#939)
- Llama.cpp: change wrong links and imports (#436)
- Fix order of API docs (#447)
- Update category slug (#442)
- Small consistency improvements (#536)
- Disable-class-def (#556)
- [breaking] Rename `model_path` to `model` in the Llama.cpp integration (#243)
- Generate api docs (#353)
- `model_name_or_path` > `model` (#418)
- Llama.cpp - review docstrings (#510)
- Llama.cpp - update examples (#511)
- Make tests show coverage (#566)
- Remove references to Python 3.7 (#601)
- Chore: add license classifiers (#680)
- Chore: change the pydoc renderer class (#718)
- Basic implementation of llama.cpp chat generation (#723)
- Update import paths for beta5 (#233)
- Mount `llama_cpp` in `haystack_integrations` (#217)
- Add Llama.cpp Generator (#179)