
CoreML Model Download and Execution Functionality #19

Open · 1 task done
PSchmiedmayer opened this issue Jul 19, 2023 · 1 comment
Labels: enhancement (New feature or request)

Comments

@PSchmiedmayer (Member)

Problem

Many models, such as LLMs, are large even when transformed into potentially on-device-executable versions using CoreML, making it impractical to ship them with a mobile application. Even when we download and abstract these models, we need UI and progress indicators to communicate the implications to the user.

Solution

All the building blocks for a good integration into SpeziML are in place:

  1. Apple CoreML already provides the functionality to download and compile a model on the user's device (see the download-and-compile sketch after this list): https://developer.apple.com/documentation/coreml/downloading_and_compiling_a_model_on_the_user_s_device
  2. Hugging Face hosts CoreML models in their model repositories that we could download, e.g., Llama 2: https://huggingface.co/pcuenq/Llama-2-7b-chat-coreml
  3. We can use SwiftUI to create a nice download progress API that tracks the progress of downloading the model and making it ready for execution (see the progress-reporting sketch after this list).
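
A minimal sketch of steps 1 and 2, pairing an async `URLSession` download with CoreML's on-device compilation API. The helper name `downloadAndCompileModel(from:)` and the persistence location are assumptions for illustration, not existing SpeziML API:

```swift
import CoreML
import Foundation

/// Downloads a CoreML model from a remote URL and compiles it on-device.
/// Returns the location of the compiled `.mlmodelc` bundle.
func downloadAndCompileModel(from remoteURL: URL) async throws -> URL {
    // Download the raw model file to a temporary location.
    let (temporaryURL, _) = try await URLSession.shared.download(from: remoteURL)

    // Compile the model for this device; CoreML places the compiled
    // model in a temporary directory and returns its location.
    let compiledURL = try await MLModel.compileModel(at: temporaryURL)

    // Move the compiled model to Application Support so it outlives
    // the temporary directory, as the CoreML documentation recommends.
    let supportDirectory = try FileManager.default.url(
        for: .applicationSupportDirectory,
        in: .userDomainMask,
        appropriateFor: nil,
        create: true
    )
    let permanentURL = supportDirectory.appendingPathComponent(compiledURL.lastPathComponent)
    if FileManager.default.fileExists(atPath: permanentURL.path) {
        try FileManager.default.removeItem(at: permanentURL)
    }
    try FileManager.default.moveItem(at: compiledURL, to: permanentURL)
    return permanentURL
}
```

The compiled model can then be loaded with `try MLModel(contentsOf: permanentURL)`.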
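
For step 3, a sketch of how download progress could surface in SwiftUI; `ModelDownloader` and `ModelDownloadView` are hypothetical names, with progress reported through the standard `URLSessionDownloadDelegate` callbacks:

```swift
import Foundation
import SwiftUI

/// Observable wrapper around a model download (hypothetical type).
@MainActor
final class ModelDownloader: NSObject, ObservableObject, URLSessionDownloadDelegate {
    @Published var fractionCompleted = 0.0

    func download(from remoteURL: URL) {
        // Note: URLSession retains its delegate; a real implementation
        // would invalidate the session once the download finishes.
        let session = URLSession(configuration: .default, delegate: self, delegateQueue: nil)
        session.downloadTask(with: remoteURL).resume()
    }

    nonisolated func urlSession(
        _ session: URLSession,
        downloadTask: URLSessionDownloadTask,
        didWriteData bytesWritten: Int64,
        totalBytesWritten: Int64,
        totalBytesExpectedToWrite: Int64
    ) {
        guard totalBytesExpectedToWrite > 0 else { return }
        let fraction = Double(totalBytesWritten) / Double(totalBytesExpectedToWrite)
        Task { @MainActor in self.fractionCompleted = fraction }
    }

    nonisolated func urlSession(
        _ session: URLSession,
        downloadTask: URLSessionDownloadTask,
        didFinishDownloadingTo location: URL
    ) {
        // Hand `location` to the compile step from the previous sketch;
        // the file must be moved before this method returns.
    }
}

struct ModelDownloadView: View {
    @StateObject private var downloader = ModelDownloader()

    var body: some View {
        ProgressView("Downloading model …", value: downloader.fractionCompleted)
            .task {
                // Placeholder URL; a real integration would point at the
                // Hugging Face-hosted CoreML model.
                downloader.download(from: URL(string: "https://example.org/model.mlmodel")!)
            }
    }
}
```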

Similar to #18, we should add an abstraction layer to the API to enable reuse across different models, maybe initially focusing on the Hugging Face and LLM use case; one possible shape is sketched below.
Testing this functionality is probably best done on a macOS machine, which might require some smaller changes to the framework.
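
One possible shape for that abstraction, sketched as a protocol with a Hugging Face-backed implementation; `DownloadableModel`, `HuggingFaceModel`, and the Hugging Face `resolve` URL layout are assumptions for illustration, not existing SpeziML API:

```swift
import CoreML
import Foundation

/// Hypothetical abstraction over any remotely hosted, on-device-executable model.
protocol DownloadableModel {
    /// Where the model can be fetched from.
    var remoteURL: URL { get }
    /// Downloads and prepares the model, reporting progress in [0, 1].
    func prepare(onProgress: @escaping (Double) -> Void) async throws -> MLModel
}

/// Hugging Face-backed implementation covering the initial LLM use case.
struct HuggingFaceModel: DownloadableModel {
    let repository: String   // e.g., "pcuenq/Llama-2-7b-chat-coreml"
    let fileName: String

    var remoteURL: URL {
        // Assumed Hugging Face "resolve" URL layout for files in a repository.
        URL(string: "https://huggingface.co/\(repository)/resolve/main/\(fileName)")!
    }

    func prepare(onProgress: @escaping (Double) -> Void) async throws -> MLModel {
        // Download and compile; granular progress would come from the
        // delegate-based approach sketched earlier.
        let (temporaryURL, _) = try await URLSession.shared.download(from: remoteURL)
        let compiledURL = try await MLModel.compileModel(at: temporaryURL)
        onProgress(1.0)
        return try MLModel(contentsOf: compiledURL)
    }
}
```

Other model sources would then only need to provide their own `DownloadableModel` conformance, keeping the download, compile, and progress plumbing shared.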

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines
@PSchmiedmayer added the enhancement label on Jul 19, 2023
@PSchmiedmayer moved this to Backlog in Project Planning on Jul 19, 2023
@philippzagar self-assigned this on Aug 7, 2023
@philippzagar moved this from Backlog to In Progress in Project Planning on Oct 20, 2023
@philippzagar removed their assignment on Mar 7, 2024
@philippzagar (Member)

Sadly, in its current state, CoreML is not optimized for running LLMs and is therefore far too slow for local LLM execution.
SpeziLLM currently provides local inference via llama.cpp, but that may change if Apple updates CoreML at this year's WWDC.

@philippzagar moved this from In Progress to Backlog in Project Planning on Mar 7, 2024