There was an error processing your request: No details were returned #484
Comments
I am running into the same issue with LM Studio. When I refresh the browser for the bolt.new chat interface, there is an interaction with the LM Studio server: `2024-11-30 12:24:12 [INFO] Received GET request to /v1/models with body: {}`. Any help would be greatly appreciated!
I'm working on a fix. I guess we need to push our Ollama model into the dashboard and obtain the API key to get it working. I tried with Gemini and it works, but the output is total trash.
Try running Ollama in Docker, or use this URL for Ollama: 'http://host.docker.internal:11434'
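A quick sanity check that the app can actually reach Ollama at that address (Ollama's root endpoint just replies with a status string):

```sh
# Run from inside the bolt container; expect the reply "Ollama is running"
curl http://host.docker.internal:11434
```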
Here is a Docker Compose file for use with one NVIDIA GPU:

```yaml
name: ollama
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
    volumes:
      - ollama:/root/.ollama
    ports:
      - 11434:11434
    container_name: ollama
    image: ollama/ollama
volumes:
  ollama:
    external: true
    name: ollama
```
You will then have to exec into the container and download any models you want. Also, if you are still getting errors after trying both, take a screenshot of your terminal so I can see if there are any errors.
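For anyone following along, a minimal sequence to bring the compose file above up and pull a model into it (the model name is just an example):

```sh
docker volume create ollama   # the compose file declares the volume as external
docker compose up -d          # start the Ollama container in the background
docker exec -it ollama ollama pull qwen2.5-coder:32b   # pull an example model
```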
@azerxafro what GPU do you have? That is a large model and a large context.
I use a garbage one (a GT 730).
That's only 2 GB of VRAM, I believe. If that is the case, it is not enough for that model, which could be why it fails. The other two options are still worth a try, though.
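If you are unsure whether a model fits your GPU, Ollama can report the sizes directly:

```sh
ollama list   # on-disk size of each pulled model
ollama ps     # memory footprint and CPU/GPU split of currently loaded models
```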
Facing the same issue, but with the codellama:7b model on Windows, run locally with pnpm.
Try it in Docker, along with oTToDev in Docker, and see if you get different results; this will help with troubleshooting. A rough sketch of the commands is below.
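If it helps, assuming the compose profiles described in the project README, running the fork in Docker looks roughly like this:

```sh
# From the cloned repo root; the --profile flag is per the project's README
docker compose --profile development up
```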
Any recommendations for LM Studio?
@AnirudhG07 just clone it, then once in the folder run 'npm i .' and then 'npm run dev'. Again, this worked fine for me.
Thanks, I'll try.
@azerxafro did you get a fix for your issue?
@azerxafro I would say it's your API key. Do you have money on your account? If not, throw $5 on it and it should work.
@dustinwloring1988 lol, I'm not that broke. How do I find the API key for Ollama's qwen2.5?
@azerxafro there is no API key for Ollama; you need to run it locally or host it in the cloud.
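Running it locally is just the Ollama CLI, no key involved; a minimal sketch using a model from this thread:

```sh
ollama serve &             # start the local server (listens on :11434 by default)
ollama pull codellama:7b   # download an example model
```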
@dustinwloring1988 he's saying that he ran Ollama locally and bolt locally as well, but the models he used, and which I used too, are not working: qwen2.5 and codellama:7b. We even added the Modelfile for Ollama and executed the given command in this repo's README, and still get the same error: "There was an error processing your request: No details were returned". I want to know if there is anyone who knows how to fix this issue, or if someone has used Ollama models before!
@Cha11enger these fixes have been merged; let me know if it works for you. Also, I use qwen2.5 32B. Try this prompt: 'make a vite tic tac toe game then install all packages before running it in the webcontainer'.
@Cha11enger if you have a Discord, we can sidebar this convo and post the outcome here, if you'd rather.
I have fixed the issue, @dustinwloring1988 @azerxafro. It has already been closed in another issue, but the code was not updated; someone needs to add the dynamic model code. Here is the reference link for the solution: check it and update the code in app\utils\constants.ts.
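For context, the dynamic-model approach queries Ollama's model-list endpoint at runtime instead of hardcoding model names in app\utils\constants.ts; you can see the data it works from with:

```sh
# Ollama's /api/tags endpoint lists every locally available model as JSON
curl http://localhost:11434/api/tags
```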
That works as a temporary fix for you, but a permanent fix needs to be found if a variable is not being used correctly. If you find one, let me know; I will look into this once I get back home.
I have spent hours over the past couple of days trying to make the Docker setup work for Ollama, but it simply won't access the Ollama host IP from within the Docker container. So I tried this
Thanks! I just figured out my API key was broken. Your command, plus putting money on the API account, worked!
Is this fixed? If not, please list your environment (OS, how you are running this fork, how you are running Ollama, and any and all commands you ran after cloning it).
I'm not even trying to use Ollama, although I have it set up and working. Running locally, I get the same error as indicated by issue 297. This logs to the console for every model with the following:
Please list your environment (OS, how you are running this fork, how you are running Ollama, and any and all commands you ran after cloning it).
OS: macOS 15.1 (24B83), running locally using VS Code. After a git pull I ran pnpm install, then pnpm run dev. The env file is configured with keys for OpenRouter and OpenAI. I was previously able to run with the same setup on a different machine (an older Intel MacBook Pro), but since installing on an M1 Mini, sometime after a merge on the 25th of November, every time I try to run a prompt I get the errors shown above. This might not be an error with Ollama specifically, as the error starting with 'Warning: Encountered two children with the same key, anthropic/claude-3.5-sonnet. Keys should be unique...' hits the console for every provider, but the VS Code terminal only reports the Ollama error I quoted above.
I'm sorry, I do not have an Apple machine to debug on; maybe someone else can help. Maybe try http://host.docker.internal for the Ollama URL in the .env.local file.
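For anyone else hitting this, the relevant setting (assuming the variable name from the repo's .env.example) would look like:

```sh
# .env.local — point the app at Ollama running on the Docker host
OLLAMA_API_BASE_URL=http://host.docker.internal:11434
```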
Did this or the latest version help?
Describe the bug
I have done everything as mentioned in the installation instructions.
I even tried on a Mac and also on Windows 11.
All dependencies are installed, but I'm getting this error:
There was an error processing your request: No details were returned
Link to the Bolt URL that caused the error
http://localhost:5173/
Steps to reproduce
1. Boot your desktop
2. Open PowerShell
3. cd Desktop
4. git clone https://github.com/coleam00/bolt.new-any-llm.git
5. Download Ollama & qwen2.5-coder:32b
6. Create a Modelfile in VS Code (see the note after the build commands for how to register it):

```
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768
```
Development build: `npm run dockerbuild`
Production build: `npm run dockerbuild:prod`
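Note for step 6: creating the Modelfile alone is not enough; it has to be registered with Ollama before the new name can be selected (the tag below is illustrative):

```sh
# Register the Modelfile with Ollama under a new tag
ollama create qwen2.5-coder-32k -f Modelfile
ollama list   # the new tag should now appear
```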
Expected behavior
There was an error processing your request: No details were returned
Screen Recording / Screenshot
Provided in description
Platform
Additional context
No response