Run locally? #10

Was just wondering if I can run this locally?

Comments
Was trying to figure this out too; surely it must be possible without a Together AI API key?
Looks like it's currently hardcoded to use Together AI, but it actually only needs any OpenAI-compatible endpoint: https://github.com/togethercomputer/MoA/blob/main/utils.py#L28
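For anyone who wants to try this before the repo is patched, here's a minimal sketch of the idea: the OpenAI Python client lets you override `base_url`, so any OpenAI-compatible server can stand in for Together AI. The URL and model name below are assumptions for a local Ollama setup, not anything from the repo:

```python
# Minimal sketch, not the repo's actual code: point the OpenAI client
# at a local OpenAI-compatible server instead of Together AI.
# The base_url and model name are assumptions -- adjust for your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. a local Ollama server
    api_key="not-needed",                  # many local servers ignore the key
)

response = client.chat.completions.create(
    model="llama3",  # whatever model your local server hosts
    messages=[{"role": "user", "content": "Hello from a local MoA setup"}],
)
print(response.choices[0].message.content)
```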
@sammcj That's what I was thinking as well.
Something like this, I think: #12
I had a quick go at this as well over here: #13. It is not very configurable, but if you have an OpenAI-compatible backend hosting several models, you can just point it there. It should be easy to split out a streaming call for the aggregator as well (sketch below).
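To make that split concrete, here's a rough sketch (the endpoint and model names are hypothetical, and this is not the PR's code): gather drafts from several reference models hosted on one OpenAI-compatible backend, then stream the aggregator's synthesis.

```python
# Rough sketch of the MoA flow against one OpenAI-compatible backend.
# Endpoint and model names are hypothetical -- substitute your own.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
reference_models = ["model-a", "model-b", "model-c"]  # same backend
question = "Explain mixture-of-agents in one paragraph."

# 1. Collect a draft answer from each reference model.
drafts = []
for model in reference_models:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    drafts.append(resp.choices[0].message.content)

# 2. Stream the aggregator's synthesis of the drafts.
prompt = "Synthesize these responses into one answer:\n\n" + "\n---\n".join(drafts)
stream = client.chat.completions.create(
    model="aggregator-model",
    messages=[{"role": "user", "content": prompt}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```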
Got it all working: https://github.com/sammcj/MoA
@IsThatYou Hi, about the function at https://github.com/togethercomputer/MoA/blob/main/utils.py#L99: in fact it is an OpenAI-compatible API for backends like FastChat or vLLM, which support various open LLMs, am I right? It is not the original OpenAI API?
That's correct. We haven't tested with other APIs, but it should work with most OpenAI-compatible APIs.
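For reference, a quick way to smoke-test whichever OpenAI-compatible backend you end up using is to list its models. The local URLs below are the usual defaults for these servers, but treat them as assumptions:

```python
# Sketch: check that an OpenAI-compatible backend is reachable
# and list the models it hosts. Default local URLs are assumptions.
from openai import OpenAI

backends = {
    "vllm": "http://localhost:8000/v1",
    "ollama": "http://localhost:11434/v1",
    "lm-studio": "http://localhost:1234/v1",
}

for name, base_url in backends.items():
    client = OpenAI(base_url=base_url, api_key="none")
    try:
        models = [m.id for m in client.models.list()]
        print(f"{name}: {models}")
    except Exception as exc:
        print(f"{name}: not reachable ({exc})")
```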