roadmap: llamacpp-engine to align with llama.cpp upstream #1728
Labels: type: epic (A major feature or initiative)
Comments
I agree that we should align with the llama.cpp upstream, but I have several concerns:
Closed
dan-homebrew changed the title from "epic: llamacpp-engine to align with llama.cpp upstream" to "roadmap: llamacpp-engine to align with llama.cpp upstream" on Dec 15, 2024.
Task list:
Related tickets need to be tested and verified:
Approach 1:
Approach 2: Build llama.cpp server as a library and load it into
Goal
llamacpp-engine
Can we consider refactoring llamacpp-engine to use the llama.cpp server implementation, and maintaining a fork with our improvements to speech, vision, etc.? This is especially relevant if we do a C++ implementation of whisperVQ in the future.