
API Request Fails with LM Studio's LLM (CORS Enabled) and Unexpected Ollama Errors #821

Closed
miracle777 opened this issue Dec 18, 2024 · 3 comments

@miracle777

Describe the bug

I encountered the following issues while using LM Studio and would like some assistance. I am running bolt.diy directly on WSL without using Docker.

Enabling CORS
As shown in the attached image, I enabled the "CORS" option in LM Studio's server settings. After doing so, I was able to select the LLM (Large Language Model) from the dropdown menu.

API Request Error
When I selected LM Studio's LLM and attempted to generate code, I received the following error message:
```
There was an error processing your request: An error occurred.
```
Ollama-related Warnings
Despite not using the Ollama server, I see these warning messages in the terminal:

WARN Constants Failed to get Ollama models: fetch failed
WARN Constants Failed to get Ollama models: fetch failed
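
These warnings most likely come from bolt.diy trying to enumerate models for every configured provider at startup. For reference, a quick way to confirm whether an Ollama server is reachable at all (a sketch only, assuming Ollama's default port 11434; not specific to bolt.diy) is:

```bash
# If nothing is listening on Ollama's default port, this fails with
# "connection refused", which matches the "fetch failed" warning above
# and is harmless when Ollama is not being used.
curl -s http://127.0.0.1:11434/api/tags
```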


Environment Details:
WSL (Windows Subsystem for Linux): Running bolt.diy directly (without Docker).
I would appreciate any guidance on:

  • Why the API request to LM Studio's server fails.
  • Why the Ollama warnings appear even though I am not using Ollama.
Thank you for your support!

Link to the Bolt URL that caused the error

http://localhost:5173/chat/10

Steps to reproduce

I entered a prompt, but I got an error.

Expected behavior

In the debug info screenshot, the chat API URL that bolt.diy uses for LM Studio appears to differ from the one LM Studio reports.

Screen Recording / Screenshot

Screenshot 2024-12-18 215035
Screenshot 2024-12-18 215051
Screenshot 2024-12-18 215143
(screenshot)

Platform

  • OS: Linux
  • Browser: Chrome
  • Version: eb6d435 (v0.0.3) - stable

Provider Used

No response

Model Used

No response

Additional context

I am getting the following error on the LMStudio server.
2024-12-18 21:50:37 [ERROR] Unexpected endpoint or method. (GET /api/health). Returning 200 anyway

I suspect the URL that bolt.diy requests from LM Studio's API is different from the one LM Studio expects.

2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
2024-12-18 20:52:33 [INFO]
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] Supported endpoints:
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> GET http://localhost:1234/v1/models
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/embeddings
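
For reference, those endpoints can be probed directly from the WSL shell to check whether LM Studio is reachable at the URL bolt.diy is configured with. This is only a sketch: the model name is a placeholder that must be replaced with one returned by /v1/models, and under WSL2 localhost may need to be replaced with the Windows host's IP.

```bash
# List the models LM Studio is currently serving.
curl -s http://localhost:1234/v1/models

# Minimal chat completion against the endpoint bolt.diy should be using.
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "your-model-name", "messages": [{"role": "user", "content": "Hello"}]}'
```

If these succeed from WSL but bolt.diy still fails, the base URL configured in bolt.diy is the likely mismatch.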

@dustinwloring1988
Collaborator

In LM Studio, try enabling: Serve on Local Network.
If that does not work, then also try the local IP address of the computer that LM Studio is running on.
In this video I used Docker for bolt.diy and Ollama, but it might help: https://youtu.be/TMvA10zwTbI
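
For anyone following along on WSL2: localhost inside WSL does not always reach services running on Windows, so a common way to find the Windows host's IP from the WSL shell is to look at the default route (a sketch, not specific to bolt.diy or LM Studio):

```bash
# Under WSL2 the Windows host is usually the default gateway of the
# virtual network, so either of these typically prints its IP:
ip route show default | awk '{print $3}'
grep nameserver /etc/resolv.conf | awk '{print $2}'
```

That address (for example http://<windows-host-ip>:1234) is then what bolt.diy should be pointed at instead of localhost.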

@miracle777
Author

miracle777 commented Dec 18, 2024

Thank you very much.
I have enabled Serve on Local Network on the LM Studio server, but the error did not improve.

With the Docker method, I was able to use LM Studio's model.

Thank you very much for showing me how to do this.
Screenshot 2024-12-18 235018

@miracle777
Author

Sorry, an update: I re-cloned the repository this morning.
I also enabled Serve on Local Network in LM Studio.
I set that URL in .env.local and started bolt.diy, and it now works on WSL without using Docker (a sketch of the setting is shown below).
Screenshot 2024-12-19 104005
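
For anyone hitting the same problem, the setting looks roughly like the entry below. The variable name should be checked against the project's .env.example (it may differ between versions), and the address is a placeholder for your own LM Studio machine:

```
# .env.local in the bolt.diy project root (values are placeholders).
# Point bolt.diy at the LM Studio server that is serving on the local network.
LMSTUDIO_API_BASE_URL=http://192.168.1.10:1234
```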
