v1.0.1
What's Changed
- Dynamic repo & readme by @ashish-aesthisia in #1
- feat: dynamic model by @parveen232 in #2
- Added custom model cache dir by @ashish-aesthisia in #3
- Model dir: List & Format by @ashish-aesthisia in #4
- Updated the "How to use" and "Contributing" sections and uploaded the UI image by @Subhanshu0027 in #5
- feat: stop generation by @parveen232 in #6
- Added functionality to switch between CUDA and CPU by @ashish-aesthisia in #7
- List & remove cached models + Updated blocks by @ashish-aesthisia in #8
- Update README.md by @ashish-aesthisia in #9
- 🧠 Error handling & Default Model Update by @ashish-aesthisia in #10
- Added requirements.txt by @ashish-aesthisia in #11
- Improved device management & execution by @ashish-aesthisia in #12
- Updated the UI image and added the command to install requirements.txt by @Subhanshu0027 in #13
- Fixed typo 👾 in README.md by @ashish-aesthisia in #14
- Fix: Remove Cached Models by @parveen232 in #16
- Update default model repo id by @parveen232 in #18
- Implement config.ini for enhanced persistence of settings by @Subhanshu0027 in #19
- Make chatbot fill screen height by @parveen232 in #20
- Implement command-line arguments for host, port, and share by @Subhanshu0027 in #21
- Enable LLM inference with llama.cpp and llama-cpp-python by @parveen232 in #33
- fix: FileNotFoundError by @parveen232 in #36
- feat: streaming support to chatbot by @parveen232 in #37
- Update README.md by @ashish-aesthisia in #38
Full Changelog: https://github.com/Aesthisia/LLMinator/commits/v1.0.1