Unable to use any Ollama models, although Ollama itself is working fine. #259
Comments
In your .env.local file, are you adding http://localhost:11434 as your Ollama base URL? If not, please duplicate .env.example to .env.local, add that entry, and try again.
I'm not sure at the moment (and it may depend on your execution context) whether you need to use 127.0.0.1 instead of localhost. localhost has been working for me more or less from the start.
The problem is not even with that; the problem is that as soon as a conversation starts, it immediately gives an error.
Oh, you're running from Docker. You should have an .env.local file with the following specified for the Ollama base URL:
# You only need this environment variable set if you want to use oLLAMA models
# EXAMPLE http://localhost:11434
OLLAMA_API_BASE_URL=http://host.docker.internal:11434
Your oTToDev instance is trying to locate Ollama at http://127.0.0.1:11434, which is not valid while you're running within a Docker container, based on the output in the video you posted (thanks for the video!). Please update .env.local, rebuild per the Docker instructions, and then re-run the container. Hopefully that takes care of it.
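For anyone unsure which of these host names the backend can actually reach, here is a quick standalone check (a sketch only, not part of the project; it assumes Node 18+ for the global fetch and something like npx tsx to run TypeScript directly). Run it both on the host and inside the bolt container to see which base URL the server-side code can actually use.

```typescript
// Probe the usual candidate base URLs and list the models each one returns.
const candidates = [
  'http://localhost:11434',
  'http://127.0.0.1:11434',
  'http://host.docker.internal:11434',
];

for (const base of candidates) {
  try {
    const res = await fetch(`${base}/api/tags`);
    const data = (await res.json()) as { models: { name: string }[] };
    console.log(`${base} -> ${data.models.map((m) => m.name).join(', ')}`);
  } catch (err) {
    console.log(`${base} -> unreachable (${(err as Error).message})`);
  }
}
```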
I had the same problem. I just reinstalled it and everything worked.
If the above host.docker.internal change fixes this for you, please respond so we can close this issue.
I am also using Docker, and I have the same issue as OP. Unfortunately, I was not able to get it running using the change to OLLAMA_API_BASE_URL. In fact, that doesn't make sense to me, because, as OP mentioned, the curl request to Ollama works.
Hi, thank you for helping me resolve this, but sadly it's still not working. As I said in the first post, oTToDev already sees the models (so it doesn't matter whether it's 127.0.0.1, localhost, or host.docker.internal). Here is the Ollama console; it succeeds in serving the model list, as the recording shows: Rekaman.2024.11.13.054919.mp4
I am having a similar issue where the app sees the Ollama models, but the chat fails. What model is it trying to send the request to? For me, no matter which Ollama model I choose, it always tries to send the request to Claude 3.5 Sonnet, which makes it fail. Same problem when I try to use LM Studio. All local LLMs appear to be broken.
All Ollama models can be listed in oTToDev, but all of them fail whenever a conversation starts.
Same here. I pulled the latest version and I have my OLLAMA_BASE_URL set properly. The problem is that when choosing Ollama, I can see my downloaded models, but when you hit enter to code something, the app is trying to get "claude-3-5-sonnet-latest" instead of the selected Ollama model:
Later edit with additional information: in the browser dev tools (latest Chrome), this is the payload sent to /api/chat:
So the model and provider are properly sent to the app and picked up by the backend. Also, from the error it seems that the provider is properly selected but not the model, and it defaults to DEFAULT_MODEL, which is claude-3-5-sonnet-latest. LATER EDIT 2: the problem is in PR #188, which keeps the user's choice and introduces a regex match for provider and model. I added some console.log entries in the stream-text.ts file; here is the debug output while choosing Ollama and qwen2.5-coder in the frontend:
The regex is failing because MODEL_LIST doesn't contain the Ollama models (I have two of them: qwen2.5-coder:7b and llama-3.1:latest), so the regex match fails and the app defaults to DEFAULT_MODEL and DEFAULT_PROVIDER.
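To make that failure mode concrete, here is a minimal sketch of the kind of regex-plus-MODEL_LIST lookup being described (illustrative names and message format only, not the actual stream-text.ts code): when the dynamically pulled Ollama models are missing from the static list, the lookup fails and the request silently falls back to DEFAULT_MODEL.

```typescript
const DEFAULT_MODEL = 'claude-3-5-sonnet-latest';
const DEFAULT_PROVIDER = 'Anthropic';

// Static list shipped with the app; dynamically fetched Ollama models are absent.
const MODEL_LIST = [
  { name: 'claude-3-5-sonnet-latest', provider: 'Anthropic' },
  { name: 'gpt-4o', provider: 'OpenAI' },
];

// Assumed message format: the frontend prepends "[Model: ...]" / "[Provider: ...]"
// markers to the user prompt, and the backend extracts them with a regex.
function resolveModel(messageContent: string) {
  const modelMatch = messageContent.match(/\[Model: (.*?)\]/);
  const providerMatch = messageContent.match(/\[Provider: (.*?)\]/);

  const requestedModel = modelMatch ? modelMatch[1] : DEFAULT_MODEL;
  const requestedProvider = providerMatch ? providerMatch[1] : DEFAULT_PROVIDER;

  // The silent fallback: "qwen2.5-coder:7b" is not in MODEL_LIST, so the
  // request ends up going to claude-3-5-sonnet-latest instead.
  const known = MODEL_LIST.find((m) => m.name === requestedModel);
  return known
    ? { model: known.name, provider: known.provider }
    : { model: DEFAULT_MODEL, provider: requestedProvider };
}

console.log(resolveModel('[Model: qwen2.5-coder:7b]\n[Provider: Ollama]\nwrite a todo app'));
// -> { model: 'claude-3-5-sonnet-latest', provider: 'Ollama' }
```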
Workaround: if you are running in Docker and you use an external URL for Ollama, just set RUNNING_IN_DOCKER=false in docker-compose.yaml; otherwise the app will silently use the host.docker.internal URL in its base-URL function.
So if RUNNING_IN_DOCKER is true, the base URL for Ollama in the backend will be set to host.docker.internal; I haven't found out exactly where else that happens. This is a major inconsistency in the app: the base URL should be retrieved from the OLLAMA_API_BASE_URL environment variable.
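A sketch of the base-URL selection being described (assumed behavior pieced together from the comments above, not the project's actual function): when RUNNING_IN_DOCKER is true, the configured value is ignored in favor of host.docker.internal, which is why setting it to false is a workaround when Ollama lives at an external URL.

```typescript
function getOllamaBaseUrl(env: Record<string, string | undefined>): string {
  const configured = env.OLLAMA_API_BASE_URL ?? 'http://localhost:11434';

  if (env.RUNNING_IN_DOCKER === 'true') {
    // The configured URL is silently replaced with the Docker-internal host,
    // which breaks setups pointing at an external Ollama instance.
    return 'http://host.docker.internal:11434';
  }

  return configured;
}

// With RUNNING_IN_DOCKER=false the value from .env.local wins again:
console.log(getOllamaBaseUrl({
  OLLAMA_API_BASE_URL: 'http://192.168.1.50:11434',
  RUNNING_IN_DOCKER: 'false',
}));
```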
This is true! I solved the problem (running in Docker and using an OpenAILike provider).
Unfortunately, it did not solve my problem. I still have an exception, but mine appears just a little different:
This error occurs with RUNNING_IN_DOCKER set to either true or false, and with all combinations of OLLAMA_API_BASE_URL mentioned above.
Thanks for the context. This sounds like an edge case based on remembering the last selected provider/model, with the update of those fields registering as a model change. There were a couple of PRs in flight recently around this; making a note, as I'm currently looking at env vars in general.
Note: review this behavior together with the provider updates in #251.
Regarding your screenshot: make sure you are running this in the same Docker network as the ollama and webui containers. If so, you need to get the container IP address for the Ollama container.
I have updated my .dockerignore and commented out the lines so the .env file is pulled in. Thank you. This clears the error on startup about the missing .env file. However, this did not solve the issue for me. When I rebuild and compose, I am still unable to see any models in the dropdown for Ollama. When I console into the running bolt container I can run "curl http://172.16.1.19:11434/api/tags" and I get a response with a list of the models running in the Ollama container. 172.16.1.19 is the host IP of my Docker instance running on an Ubuntu VM. Both the Ollama container and the Bolt container are running on the same host. How can I continue to look into why the models are not populating the dropdown?
I should add that this issue happens with the Docker container as well as when running directly with pnpm.
This issue is not about that. Check the first post: it's run with docker compose --profile development --env-file .env.local up, meaning it's directly using the .env.local file. The problem is entirely different.
Found a solution that worked in my case. The issue was two-fold:
1. A code change (before vs. after) so the model name is passed through correctly.
2. In docker-compose.yaml, I needed to set up the correct networking and Ollama URL under services / networks.
These changes allowed the application to properly pass the model name to Ollama instead of trying to use the URL as the model name. After making these changes and rebuilding with docker compose --profile development down (and bringing it back up), the Ollama integration started working correctly. This seems to be related to, but separate from, the model-selection issue others are experiencing with Claude defaults.
The only issue now is that even though it's writing code, the Preview window in the bolt.new UI isn't working at all. This appears to be a separate issue from the initial connection problem. Steps to reproduce the Preview issue:
Can anyone else confirm whether they're seeing the same behavior with the Preview functionality after getting Ollama working? Environment details:
It's working for me now. When I initially tried to install oTToDev, the Docker build failed, and so did the regular Windows install method. I finally got it to install inside of WSL, but then Ollama did not work: it showed the models, but all requests went to Claude Sonnet 3.5 no matter what I had selected, and I am obviously not running Claude Sonnet. I was able to get the normal Windows install method working and everything works now. I would ideally like to get this running in Docker, but I continue to have issues building the container.
I had the same problem (not the Docker version, just the Windows install). I found out that on my setup I had to fill in something for the Anthropic key in .env.local (ANTHROPIC_API_KEY=dw23423NOAI), and after that it worked fine.
I got the same issue here. I'm on macOS and deployed it using Docker, and I tried the "fix" of changing RUNNING_IN_DOCKER to false, but that didn't resolve the issue. As you can see below, the model being referenced is still Claude 3.5:
Same issue here; I installed bolt for the first time today. Bolt can see Ollama and load the models, but it cannot chat with them. Ollama doesn't even see the request. The errors are the same as OP's.
How are people getting their Ollama models to show up in the dropdown? Hahaha. I can't even get that far. I was able to deploy OpenHands and use my Ollama instance with no issues at all.
Refresh the browser, wait, change to another source, then back to Ollama, and the model list from Ollama will show.
Thanks for the info. Ollama is up top for me in terms of provider issue resolution; I'm going to spend some time with these issues starting tomorrow.
Since you will focus on it, here is another small issue. Whenever you choose Ollama and get the models in the dropdown list, if you refresh the page, the Ollama provider remains selected (because of PR #188) but the model list is empty. You have to choose another provider and then choose Ollama again to see the models. This might happen with other providers that pull the model list dynamically, such as OpenAILike. Once the page is refreshed, the Ollama provider is selected but no models appear in the list, and if you send a prompt it throws an error and falls back to claude-3-5-sonnet-latest, the default, even though the browser's local storage still has the Ollama provider and the model you had chosen before. This happens until you select another provider with a static model list and then select Ollama again.
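A rough sketch of what that refresh behavior looks like from the outside (the handler and endpoint names here are assumptions, not the real component code): the provider choice is restored from local storage, but the dynamic model list is only fetched inside the provider-change handler, which never runs on the initial page load.

```typescript
let modelList: string[] = [];

// Hypothetical endpoint standing in for however the app fetches dynamic models.
async function fetchDynamicModels(provider: string): Promise<string[]> {
  const res = await fetch(`/api/models?provider=${encodeURIComponent(provider)}`);
  return (await res.json()) as string[];
}

function onProviderChange(provider: string) {
  localStorage.setItem('provider', provider);
  void fetchDynamicModels(provider).then((models) => {
    modelList = models; // only populated when the user actively switches provider
  });
}

// On page load the stored provider is applied directly, bypassing the handler,
// so an Ollama/OpenAILike provider comes back selected but with an empty list
// until the user switches to another provider and back.
const restoredProvider = localStorage.getItem('provider') ?? 'Anthropic';
console.log(restoredProvider, modelList);
```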
I'm one of the people who has seen it work, but most of the time my list is empty for Ollama.
There's an easy fix without any hardcoding: open a terminal and run the command there; then start the Bolt application, refresh the browser, choose the Ollama models, and it should be working.
So just for my understanding: running Bolt with local Ollama is currently broken? Would it maybe be possible/smart to add a big, highlighted banner to the README so that others do not spend hours trying to get this running like I did, and know they will just have to wait?
Well, I have this issue too, but I managed to hard-fix running local Ollama at localhost/127.0.0.1:11434; there are some bugs in the current project preventing local use. Not sure why I always get the claude-sonnet model instead of the model selected in the dropdown (for me, a custom llama3.1:8b with a 32k context window for now). First, set your model in the streamText method in app/lib/.server/llm/stream-text.ts; in my case I hardcoded it. Second, not sure why, but it attempts to connect to the IPv6 localhost instead of IPv4, so I am hardcoding the address in the getOllamaModel method in app/lib/.server/llm/model.ts. And that's it, that's my hard-fix which allows me to make it work for now. I created the fork and PR just in case you want to pull from my fork.
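For illustration only, this is the general shape of the hard-fix described above (the commenter's actual diff lives in their fork/PR; the names and values here are assumptions): pin the model in the streamText path instead of relying on the broken lookup, and force IPv4 so the request does not go to the IPv6 loopback where Ollama isn't listening.

```typescript
// 1) Pin the model instead of trusting the (currently broken) model lookup.
const HARDCODED_MODEL = 'llama3.1:8b'; // whatever model you actually run locally

function pickModel(_requested: string | undefined): string {
  return HARDCODED_MODEL;
}

// 2) Force IPv4: "localhost" can resolve to ::1 on some Node/OS setups,
// while Ollama listens on 127.0.0.1:11434 by default.
function forceIpv4BaseUrl(configured?: string): string {
  return (configured ?? 'http://127.0.0.1:11434').replace('localhost', '127.0.0.1');
}

console.log(pickModel('qwen2.5-coder:7b'), forceIpv4BaseUrl('http://localhost:11434'));
```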
@dctfor At least I can get something done until this is addressed. Thanks a lot!
@TheFoxStudio I will then review why the dropdown is not working as expected and tries to send claude-sonnet.
The issue with the DEFAULT_MODEL 'claude-3-5-sonnet-latest' being used instead of the selected one is that the static model list does not include the Ollama models by default. When the app asks the running Ollama for its available models via /api/tags, it does not update the staticModels used by the streamText method; I tried to filter the model list by the Ollama provider, but it is empty.
Originally it used to send the message as-is because it used only one provider. Then it was adjusted to allow virtually any provider, but the issue is with the model list; so far I'm unable to fix that bug while keeping the filtering in place.
And now it is fixed in my PR; that change will allow virtually any model from the source you set in the env vars. No issue with this simple change on my local setup.
I tried your branch and I see no change, other than that the correct model is now listed in the error.
OK, so using @dctfor's fixed branch and adding host network mode to the docker compose file, I am finally able to get responses from chatting. I had not tried host network mode before trying his branch.
Finally 🥹 Desktop.2024.11.18.20.29.40.04.mp4
Removing the .env.local definition from the .dockerignore file did the trick for me! Thanks!
@chrismahoney @coleam00 I proposed a fix for this issue in #344
I had the same issue. I'm running locally (not in Docker). All I needed to do was set the base URL for Ollama in my .env.local to http://127.0.0.1:11434 (not localhost), and it now works fine.
This issue has been marked as stale due to inactivity. If no further activity occurs, it will be closed in 7 days.
Describe the bug
I have Ollama installed in Windows 11 24H2, default port 11434.
I installed bolt.new-any-llm on WSL2 Debian 12 (mirrored network mode).
I can access and use Ollama just fine from the WSL2 Debian bash terminal:
wsluser@DESKTOP:~/labs/bolt.new-any-llm$ curl http://127.0.0.1:11434
Ollama is running
I can even use it to ask a "why is the sky blue?" question from WSL2 Debian bash using curl, so the network / connection is not a problem.
I set the Ollama base URL in .env.local to http://127.0.0.1:11434.
I run the container with that .env.local file; the command is:
docker compose --profile development --env-file .env.local up
The container runs fine. I opened the browser, and Bolt can see the Ollama LLM list:
But when the conversation starts, it gives an error RIGHT AWAY.
In the bash console there is an error accessing Ollama:
It cannot access http://127.0.0.1:11434/api/chat? Or what? Is it a POST vs GET thing that Ollama has a problem with?
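For reference, /api/chat is indeed a POST endpoint in Ollama's API, so the verb is not the problem. Below is a minimal direct call (a sketch assuming Node 18+ for the global fetch); running it from inside the bolt container with different base URLs helps separate a connectivity issue from a request-format issue.

```typescript
// Direct POST to Ollama's chat endpoint, bypassing the app entirely.
const base = 'http://127.0.0.1:11434'; // try host.docker.internal:11434 from inside the container

const res = await fetch(`${base}/api/chat`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'qwen2.5-coder:7b', // use a model name that `ollama list` actually shows
    messages: [{ role: 'user', content: 'why is the sky blue?' }],
    stream: false,
  }),
});

const data = (await res.json()) as { message?: { content: string } };
console.log(res.status, data.message?.content);
```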
Link to the Bolt URL that caused the error
http://localhost:5173
Steps to reproduce
1. Have Ollama running and reachable (at 127.0.0.1:11434) from inside Linux WSL2.
2. Set the Ollama base URL in .env.local.
3. Run docker compose --profile development --env-file .env.local up
Expected behavior
Error.
Screen Recording / Screenshot
Rekaman.2024.11.12.192136.mp4
Platform
Additional context
No response