WARN Constants Failed to get LMStudio models: fetch failed #721
Comments
I’m experiencing the same thing even though I use Gemini models. I’ve never used Ollama but I’m getting the same alert.
@domforson @Mhinolv @Arav-Shakya I tried the latest version and turned LM Studio off in the settings menu and this fixed it.
Yeah, there's no chance to take full advantage of it if it doesn't support it. I was wondering whether running Ollama in Docker might also work. I'm not sure, but I think it worked before and doesn't work now. Thanks to anyone for advice.
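On the Docker idea: running Ollama in a container generally works as long as the API port is published to the host so bolt.diy can still reach the URL set in .env. A minimal sketch, assuming the stock ollama/ollama image and default port (the container/volume names and model tag below are just examples):

```
# Run Ollama in Docker and publish the API port on the host
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

# Pull a model inside the container (adjust the tag to the model you actually use)
docker exec -it ollama ollama pull qwen2.5-coder:32b
```

With the port published like this, OLLAMA_API_BASE_URL=http://127.0.0.1:11434 should keep working unchanged.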
Change your browser, or clear cookies/data.
Changing .env doesn't work for me.
Hi, I have a problem when I use Ollama: when I send it a message, it gives me this error: There was an error processing your request: An error occurred.
I'm having the same issue. Does someone know how to solve it?
Please provide more context with a screenshot.
Can you show a terminal screenshot?
Can you try whether this PR works for you?
Describe the bug
I tried to run bolt.diy on my VM, and it gives me WARN Constants Failed to get LMStudio models: fetch failed every time I give a prompt. I am using Ollama locally.
Link to the Bolt URL that caused the error
http://localhost:5173/chat/4
Steps to reproduce
1. Create a VM on Google Cloud (L4 GPU, machine type g2-standard-16, 64 GB memory).
2. Select Windows Server 2019.
3. Install the latest Node.js.
4. Install Git Bash and VS Code.
5. Install Ollama and the Qwen 2.5 Coder 32B model.
6. Install the node modules and run the repo.
7. Change OLLAMA_API_BASE_URL=http://127.0.0.1:11434 in .env (see the sketch right after this list).
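For reference, a minimal .env sketch with both local providers. The OLLAMA_API_BASE_URL line is taken from this report; the LMSTUDIO_API_BASE_URL name follows the pattern used for other providers in bolt.diy's .env.example and is an assumption here, so leave it unset (or turn LM Studio off in the provider settings) if you don't run LM Studio:

```
# Ollama running locally on the VM
OLLAMA_API_BASE_URL=http://127.0.0.1:11434

# LM Studio local server (assumed variable name; only needed if LM Studio is actually running)
LMSTUDIO_API_BASE_URL=http://127.0.0.1:1234
```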
Expected behavior
Once you select the Ollama model and give it a prompt, it logs WARN Constants Failed to get LMStudio models: fetch failed.
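The warning names LM Studio even though only Ollama is in use, which suggests bolt.diy attempts to fetch the model list from every configured provider. A quick way to check which local endpoint is actually unreachable from the VM, assuming the default ports (11434 for Ollama, 1234 for LM Studio):

```
# Ollama model list - should return JSON if Ollama is up
curl http://127.0.0.1:11434/api/tags

# LM Studio model list - fails with a connection error if LM Studio's server
# isn't running, which matches the "fetch failed" warning
curl http://127.0.0.1:1234/v1/models
```

If the Ollama call succeeds and the LM Studio call fails, the warning is coming from the LM Studio provider check, and as noted in the comments above, disabling LM Studio in the settings menu makes it go away.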
Screen Recording / Screenshot
No response
Platform
Browser: Chrome
OS: Windows Server 2019 with desktop environment.
VM: Google Cloud
Provider Used
No response
Model Used
No response
Additional context
No response