
WARN Constants Failed to get LMStudio models: fetch failed #721

Open
Arav-Shakya opened this issue Dec 14, 2024 · 15 comments
Comments

@Arav-Shakya

Describe the bug

I tried to run bolt.diy on my VM, and it gives me WARN Constants Failed to get LMStudio models: fetch failed every time I give a prompt. I am using Ollama locally.
[Screenshot 2024-12-14 at 8 35 32 PM]
[Screenshot 2024-12-14 at 8 36 06 PM]

Link to the Bolt URL that caused the error

http://localhost:5173/chat/4

Steps to reproduce

1. Create a VM on Google Cloud (L4 GPU, machine type: g2-standard-16, 64 GB memory).
2. Select Windows Server 2019.
3. Install the latest Node.js.
4. Install Git Bash and VS Code.
5. Install Ollama and pull Qwen 2.5 Coder 32B.
6. Install the node modules and run the repo (see the sketch below).
7. Change OLLAMA_API_BASE_URL=http://127.0.0.1:11434 in .env.
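
A minimal sketch of steps 5-7, assuming the standard bolt.diy pnpm workflow; the repo URL and model tag below are illustrative, not taken from this thread:

# clone and install dependencies (repo URL is illustrative)
git clone https://github.com/stackblitz-labs/bolt.diy.git
cd bolt.diy
pnpm install

# pull the model (assumes Ollama is already installed and serving on its default port)
ollama pull qwen2.5-coder:32b

# copy the example env file and set the Ollama base URL in it
cp .env.example .env
# edit .env so that it contains: OLLAMA_API_BASE_URL=http://127.0.0.1:11434

# start the dev server
pnpm run dev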

Expected behavior

Once you select the Ollama model and give it a prompt, it gives WARN Constants Failed to get LMStudio models: fetch failed.

Screen Recording / Screenshot

No response

Platform

Browser: Chrome
OS: Windows Server 2019 with a desktop environment.
VM: Google Cloud

Provider Used

No response

Model Used

No response

Additional context

No response

@Mhinolv

Mhinolv commented Dec 14, 2024

Thank you for submitting this, I am seeing the same thing.

[Screenshot 2024-12-14 at 9 53 28 AM]

@domforson

I’m experiencing the same thing even though I use Gemini models. I’ve never used Ollama but I’m getting the same alert.

@dustinwloring1988
Collaborator

@domforson @Mhinolv @Arav-Shakya I tried the latest version and turned LM Studio off in the settings menu, and this fixed it.

@libbi3605

Yeah, there's no chance to take full advantage of it if it doesn't support it. I was wondering if running Ollama in Docker might work too. Also, I'm not sure, but I think it worked before and doesn't work now. Thanks to anyone for advice.

@lamanweb

Change your browser, or clear your cookies/site data.

@Arav-Shakya
Author

Arav-Shakya commented Dec 16, 2024

> Thank you for submitting this, I am seeing the same thing.
>
> [Screenshot 2024-12-14 at 9 53 28 AM]

I did find a solution to this: rename .env.example to .env.local. If you are using Ollama, change the following:

Before:
OLLAMA_API_BASE_URL=
After:
OLLAMA_API_BASE_URL=http://127.0.0.1:11434

This one below will solve the LM Studio issue.
Before:
LMSTUDIO_API_BASE_URL=
After:
LMSTUDIO_API_BASE_URL=http://127.0.0.1:1234

After these changes the issues were resolved, but when I tried running the Qwen 2.5 Coder 32B model I didn't get any response or any error.

Please check whether it works for you.
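
Not from this thread, but a quick way to sanity-check those base URLs before retrying: both servers expose a models listing on their default ports (Ollama's /api/tags and LM Studio's OpenAI-compatible /v1/models), so a curl from the same machine should return JSON if the server is reachable:

# Ollama: should return the list of installed models as JSON
curl http://127.0.0.1:11434/api/tags

# LM Studio: OpenAI-compatible models endpoint on its default port
curl http://127.0.0.1:1234/v1/models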

@MishaNyaCopilot

Changing .env doesn't work for me.

@VictimOfPing

Hi, I have a problem when I use Ollama: when I send it a message, it gives me this error: There was an error processing your request: An error occurred.

@roodfps

roodfps commented Dec 17, 2024

I'm having the same issue. Does anyone know how to solve it?

@Arav-Shakya
Author

Please provide more context with a screenshot.

@thecodacus
Collaborator

For Ollama, please use the UI to set the base URL and try again, from here:

[screenshot]

@VictimOfPing

> Please provide more context with a screenshot.

[screenshots]

@thecodacus
Collaborator

Can you show a screenshot of the terminal where you ran pnpm run dev?

@VictimOfPing

> Can you show a screenshot of the terminal where you ran pnpm run dev?

It is empty:

[screenshot]

@thecodacus
Collaborator

Can you try whether this PR works for you?
#816
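
If you want to test that PR locally, one option (assuming you have the GitHub CLI installed; this is the standard gh command, not something from this thread) is:

# check out the PR branch locally, then reinstall and restart
gh pr checkout 816
pnpm install
pnpm run dev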
