Add base API URL field for Ollama and OpenAI embedding models #1136
Conversation
Jupyter AI currently allows the user to call a model at a URL (location) different from the default one by specifying a Base API URL. This can be done for Ollama and OpenAI provider models. However, for these providers, there is no way to change the API URL for embedding models when using the `/learn` command in RAG mode. This PR adds an extra field to make this feasible.

Tested as follows for Ollama:

[1] Start the Ollama server on port 11435 instead of 11434 (the default): `OLLAMA_HOST=127.0.0.1:11435 ollama serve`
[2] Set the Base API URL in the settings panel.
[3] Check that the new API URL works.
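For context on what the new field controls, here is a minimal sketch of pointing an embedding client at the custom port, assuming the provider is backed by LangChain's `OllamaEmbeddings` (the model name below is just an example):

```python
from langchain_community.embeddings import OllamaEmbeddings

# Embedding client aimed at the non-default Ollama port from step [1].
# When base_url is left unset, it defaults to http://localhost:11434.
embeddings = OllamaEmbeddings(
    model="mxbai-embed-large",          # example model; any pulled model works
    base_url="http://127.0.0.1:11435",  # the custom Base API URL
)

vector = embeddings.embed_query("hello world")
print(len(vector))  # dimensionality of the returned embedding
```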
[pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
I noticed a bug; I'm pushing a change that alters the …
- Tested `/learn` and `/ask` with haiku-3.5 and titan-embed-v1. ✅ (No Base API URL)
- Tested `/learn` and `/ask` with ollama-llama3.2 and ollama-mxbai-embed-large. ✅ (No Base API URL) All subsequent tests use Ollama, to exercise the changes in `ollama.py`.
- Tested again with Base API URL = 11434 (the default, set explicitly). ✅
- Tested again after clearing out the Base API URL; still works. ✅
- Restarted Ollama with port = 12345, added this to the Base API URL, and it all works as expected. ✅
- Removed the custom Base API URL (blank field); the `/learn` and `/ask` commands now fail, as they should, because Ollama is still running on the custom port. ✅
- Leaving the custom field blank, restarted Ollama to return to the default API URL, and everything works as expected. ✅
- With OpenAI embeddings, left the Base API URL blank (it works ✅), then added the URL (it works ✅), and then deleted the URL (it still works ✅), confirming that the change in `config_manager.py` is implemented (see the fallback sketch below).
Code looks good as well.
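The blank-field behavior exercised above is the crux of the `config_manager.py` change: an empty Base API URL should fall back to the provider's default. A minimal sketch of how such a fallback can work; the helper below is hypothetical, for illustration only, not Jupyter AI's actual code:

```python
# Hypothetical sketch of the blank-field fallback; names are illustrative,
# not Jupyter AI's actual implementation.
DEFAULT_OLLAMA_URL = "http://localhost:11434"  # Ollama's documented default

def resolve_base_url(field_value, default=DEFAULT_OLLAMA_URL):
    """Return the user-supplied Base API URL, or the provider default
    when the settings field is blank or unset."""
    if field_value and field_value.strip():
        return field_value.strip()
    return default

assert resolve_base_url("") == DEFAULT_OLLAMA_URL    # blank field -> default
assert resolve_base_url(None) == DEFAULT_OLLAMA_URL  # unset field -> default
assert resolve_base_url("http://127.0.0.1:12345") == "http://127.0.0.1:12345"
```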
Kicking CI since the RTD workflow has stalled.
@meeseeksdev please backport to v3-dev
Add base API URL field for Ollama and OpenAI embedding models
Add base API URL field for Ollama and OpenAI embedding models (#1149) (Co-authored-by: Sanjiv Das <[email protected]>)
Description

Related issue: BedrockEmbeddingsProvider #493

Jupyter AI currently allows the user to call a model at a URL (location) different from the default one by specifying a Base API URL. This can be done for Ollama and OpenAI provider models. However, for these providers, there is no way to change the API URL for embedding models when using the `/learn` command in RAG mode. This PR adds an extra field to make this feasible.

Testing instructions

Testing as follows for Ollama:

[1] Start the Ollama server on port 11435 instead of 11434 (the default): `OLLAMA_HOST=127.0.0.1:11435 ollama serve`
[2] Set the Base API URL in the settings panel.
[3] Check that the new API URL works for completions and `/learn`.
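For step [3], the custom port can also be sanity-checked outside Jupyter AI. A sketch using the `requests` library against Ollama's standard `/api/embeddings` endpoint (the model name is again an example):

```python
import requests

# Query the Ollama server started on the non-default port in step [1].
resp = requests.post(
    "http://127.0.0.1:11435/api/embeddings",
    json={"model": "mxbai-embed-large", "prompt": "hello world"},
    timeout=30,
)
resp.raise_for_status()
print(len(resp.json()["embedding"]))  # embedding dimensionality
```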