
There was an error processing your request: No details were returned #484

Open
azerxafro opened this issue Nov 30, 2024 · 32 comments
Labels: question (Further information is requested)

Comments

@azerxafro

Describe the bug

I have done everything as described in the installation instructions. I even tried on both a Mac and Windows 11.
All dependencies are installed, but I'm getting this error:
There was an error processing your request: No details were returned
Screenshot 2024-11-30 205102

Link to the Bolt URL that caused the error

http://localhost:5173/

Steps to reproduce

1. Boot your desktop
2. Open PowerShell
3. cd Desktop
4. git clone https://github.com/coleam00/bolt.new-any-llm.git
5. Download Ollama and pull qwen2.5-coder:32b
6. Create a Modelfile in VS Code (see the sketch after these steps):

FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768

7. Development build:

npm run dockerbuild

or production build:

npm run dockerbuild:prod
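
For reference, a minimal sketch of how the Modelfile above is registered with Ollama before building; the custom model name qwen2.5-coder-32k is just an example, not something from this repo:

ollama create qwen2.5-coder-32k -f Modelfile   # register the Modelfile as a new local model
ollama list                                    # confirm the new model shows up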

Expected behavior

There was an error processing your request: No details were returned

Screen Recording / Screenshot

Provided in description

Platform

  • OS: Windows 11
  • Browser: Chrome
  • Version: Version 131.0.6778.86 (Official Build) (64-bit)

Additional context

No response

@azerxafro
Author

Screenshot 2024-11-30 205247
The error I'm getting

@CoderCommander

I am running into the same issue with LM Studio.

When I refresh the bolt.new chat interface in the browser, there is an interaction with the LM Studio server:

2024-11-30 12:24:12 [INFO] Received GET request to /v1/models with body: {}
2024-11-30 12:24:12 [INFO] Returning {
  "data": [
    {
      "id": "qwen2.5-coder-32b-instruct",
      "object": "model",
      "owned_by": "organization_owner"
    },
    {
      "id": "text-embedding-nomic-embed-text-v1.5",
      "object": "model",
      "owned_by": "organization_owner"
    }
  ],
  "object": "list"
}

Any help would be greatly appreciated!

@azerxafro
Author

I'm working on a fix. I guess we need to push our Ollama model into our dashboard and obtain the API key to get it working. I tried with Gemini and it works, but the output is total trash.

@dustinwloring1988
Collaborator

Try running Ollama in Docker, or use this URL for Ollama: 'http://host.docker.internal:11434'.
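
For clarity, that URL goes in .env.local. A minimal sketch; the OLLAMA_API_BASE_URL variable name is taken from this repo's .env.example, so adjust it if yours differs:

# when oTToDev runs inside Docker and Ollama runs on the host:
OLLAMA_API_BASE_URL=http://host.docker.internal:11434
# when both run directly on the host:
# OLLAMA_API_BASE_URL=http://127.0.0.1:11434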

@dustinwloring1988
Collaborator

dustinwloring1988 commented Nov 30, 2024

Here is a Docker Compose file for use with one NVIDIA GPU:

name: ollama
services:
    ollama:
        deploy:
            resources:
                reservations:
                    devices:
                        # reserve the available NVIDIA GPUs for the container
                        - driver: nvidia
                          count: all
                          capabilities:
                              - gpu
        volumes:
            # persist downloaded models outside the container
            - ollama:/root/.ollama
        ports:
            # expose the Ollama API on the host
            - 11434:11434
        container_name: ollama
        image: ollama/ollama
volumes:
    ollama:
        external: true
        name: ollama

@dustinwloring1988
Collaborator

dustinwloring1988 commented Nov 30, 2024

You will then have to exec into the container and download any models you want. Also, if you are still getting errors after trying both, take a screenshot of your terminal so I can see whether there are any errors.
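
A minimal sketch of that, assuming the compose file above (container name ollama) and the model from this issue:

docker compose up -d                                   # start Ollama in the background
docker exec -it ollama ollama pull qwen2.5-coder:32b   # pull a model inside the container
docker exec -it ollama ollama list                     # verify it is available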

@dustinwloring1988
Collaborator

@azerxafro what GPU do you have? That is a large model with a large context.

@azerxafro
Author

I use a garbage GPU (a GT 730).

@dustinwloring1988
Collaborator

That's only 2 GB of VRAM, I believe. If so, it is not enough for that model, which could be why you're seeing the error. The other two options are still worth a try, though.

@Cha11enger

Facing the same issue, but with the codellama:7b model on Windows, run locally with pnpm.

@dustinwloring1988
Collaborator

Try it in Docker, alongside oTToDev in Docker, and see if you get different results; that will help with troubleshooting.

@CoderCommander

Any recommendations for LM Studio?
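
For reference, LM Studio's local server speaks the OpenAI-compatible API on port 1234 by default (your log above already shows /v1/models being hit). A minimal sketch of pointing this fork at it; the LMSTUDIO_API_BASE_URL variable name is an assumption based on this repo's .env.example, so adjust it if yours differs:

# .env.local
LMSTUDIO_API_BASE_URL=http://localhost:1234/v1
# quick check that the server is reachable and lists your loaded models:
# curl http://localhost:1234/v1/models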

@AnirudhG07

Same issue for gpt-4o.
image

I am not interested in using Docker. Is there anything I can do?

@dustinwloring1988
Collaborator

dustinwloring1988 commented Dec 1, 2024

@AnirudhG07 just clone it, then once in the folder run 'npm i .' and then 'npm run dev'.

Again, this worked fine for me.
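
Spelled out, the non-Docker sequence looks roughly like this (a sketch based on the steps above and the repo README; the .env.example copy step is an assumption if your checkout differs):

git clone https://github.com/coleam00/bolt.new-any-llm.git
cd bolt.new-any-llm
cp .env.example .env.local    # add your API keys / provider base URLs here
npm install                   # or: pnpm install
npm run dev                   # or: pnpm run dev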

@AnirudhG07

AnirudhG07 commented Dec 1, 2024

Thanks, I'll try.
Edit: @dustinwloring1988 it is still showing the same thing. I have my API key inside .env.local and also entered it in the UI at localhost, yet it's not working.
Here is the terminal output:

  requestBodyValues: {
    model: 'gpt-4o',
    logit_bias: undefined,
    logprobs: undefined,
    top_logprobs: undefined,
    user: undefined,
    parallel_tool_calls: undefined,
    max_tokens: 8000,
    temperature: 0,
    top_p: undefined,
    frequency_penalty: undefined,
    presence_penalty: undefined,
    stop: undefined,
    seed: undefined,
    max_completion_tokens: undefined,
    store: undefined,
    metadata: undefined,
    response_format: undefined,
    messages: [ [Object], [Object] ],
    tools: undefined,
    tool_choice: undefined,
    stream: true,
    stream_options: undefined
  },
  statusCode: 404,
  responseHeaders: {
    'alt-svc': 'h3=":443"; ma=86400',
    'cf-cache-status': 'DYNAMIC',
    'cf-ray': '8eb25bf7c8609367-MAA',
    connection: 'keep-alive',
    'content-encoding': 'br',
    'content-type': 'application/json; charset=utf-8',
    date: 'Sun, 01 Dec 2024 10:23:00 GMT',
    server: 'cloudflare',
    'set-cookie': '_cfuvid=t8bDFZGi2DZDKIuBm0ITePnKlJz1iKfHxtomfPVWQ_k-1733048580583-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',
    'strict-transport-security': 'max-age=31536000; includeSubDomains; preload',
    'transfer-encoding': 'chunked',
    vary: 'Origin',
    'x-content-type-options': 'nosniff',
    'x-request-id': 'req_5433135e6c12c1fbc7b7fc0d49ba9d4c'
  },
  responseBody: '{\n' +
    '    "error": {\n' +
    '        "message": "The model `gpt-4o` does not exist or you do not have access to it.",\n' +
    '        "type": "invalid_request_error",\n' +
    '        "param": null,\n' +
    '        "code": "model_not_found"\n' +
    '    }\n' +
    '}\n',
  isRetryable: false,
  data: {
    error: {
      message: 'The model `gpt-4o` does not exist or you do not have access to it.',
      type: 'invalid_request_error',
      param: null,
      code: 'model_not_found'
    }
  },
  [Symbol(vercel.ai.error)]: true,
  [Symbol(vercel.ai.error.AI_APICallError)]: true

@Cha11enger

@azerxafro did you find any fix for your issue?

@dustinwloring1988
Collaborator

@azerxafro I would say it's your API key. Do you have money on your account? If not, put $5 on it and it should work.
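
A quick way to check which models a key can actually access; the 404 model_not_found above usually means the key or account simply cannot see gpt-4o:

curl -s https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"
# look for "gpt-4o" in the returned list; if it's missing, the account tier or credit is the problem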

@azerxafro
Author

@dustinwloring1988 lol, I'm not that broke. How do I find the API key for Ollama's qwen2.5?

@dustinwloring1988
Collaborator

@azerxafro there is no API key for Ollama; you need to run it locally or host it in the cloud.

@Cha11enger

@dustinwloring1988 he's saying that he ran Ollama locally and Bolt locally as well, but the models we used are not working (qwen2.5 for him, codellama:7b for me). We even added the Modelfile for Ollama and ran the command given in this repo's README, and we still get the same error: "There was an error processing your request: No details were returned". I want to know if anyone knows how to fix this issue, or has used Ollama models with this before!

@dustinwloring1988
Collaborator

dustinwloring1988 commented Dec 1, 2024

@Cha11enger these fixes have been merged; let me know if it works for you. Also, I use qwen2.5 32B. Try this prompt: 'make a vite tic tac toe game then install all packages before running it in the webcontainer'.

@dustinwloring1988
Collaborator

@Cha11enger if you have Discord, we can sidebar this conversation and post the outcome here, if you'd rather.

@Cha11enger

I have fixed the issue @dustinwloring1988 @azerxafro. It was already resolved in another issue, but the code was not updated; someone needs to add dynamic model handling. Here's the reference link to the solution:

#451 (comment)

Check and update the code in app/utils/constants.ts:

export const DEFAULT_MODEL = 'qwen2.5-coder:3b'; // replace with the name of the model you are using

It's working fine for me now.
image
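
Note that the value has to match the model name exactly as Ollama reports it; a quick way to check, assuming Ollama is installed locally:

ollama list
# copy the NAME column entry (e.g. qwen2.5-coder:3b) into DEFAULT_MODEL in app/utils/constants.ts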

@dustinwloring1988
Collaborator

That works as a temporary fix for you, but a permanent fix needs to be found if a variable is not being used correctly. If you find one, let me know; I will look into this once I get back home.

@tailagency

@AnirudhG07 just clone it, then once in the folder run 'npm i .' and then 'npm run dev'.

Again, this worked fine for me.

I have spent hours over the past couple of days trying to make the Docker setup work for Ollama, but it simply won't reach the Ollama host IP from within the Docker container. So I tried this npm i && npm run dev approach, and it worked!

@AnirudhG07

@AnirudhG07 just clone it, then once in the folder run 'npm i .' and then 'npm run dev'.
Again, this worked fine for me.

I have spent hours over the past couple of days trying to make the Docker setup work for Ollama, but it simply won't reach the Ollama host IP from within the Docker container. So I tried this npm i && npm run dev approach, and it worked!

Thanks! I just figured out my API account was broke. Your command + some API ka-ching ($) worked!

@dustinwloring1988
Collaborator

dustinwloring1988 commented Dec 2, 2024

Is this fixed? If not, please list your environment (OS, how you are running this fork, how you are running Ollama, and any and all commands you ran after cloning it).

dustinwloring1988 added the question (Further information is requested) label on Dec 2, 2024
@dvmac00

dvmac00 commented Dec 2, 2024

I'm not even trying to use Ollama, although I have it set up and working. Running locally, I get the same error as indicated in issue #297:

Error getting Ollama models: TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11731:11)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Object.getOllamaModels [as getDynamicModels] (/Users/dvh/Code/bolt.new-any-llm/app/utils/constants.ts:294:22)
    at async Promise.all (index 0)
    at Module.initializeModelList (/Users/dvh/Code/bolt.new-any-llm/app/utils/constants.ts:365:9)
    at handleRequest (/Users/dvh/Code/bolt.new-any-llm/app/entry.server.tsx:30:3)
    at handleDocumentRequest (/Users/dvh/Code/bolt.new-any-llm/node_modules/.pnpm/@[email protected][email protected]/node_modules/@remix-run/server-runtime/dist/server.js:340:12)
    at requestHandler (/Users/dvh/Code/bolt.new-any-llm/node_modules/.pnpm/@[email protected][email protected]/node_modules/@remix-run/server-runtime/dist/server.js:160:18)
    at /Users/dvh/Code/bolt.new-any-llm/node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected][email protected]_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25 {
  cause: Error: connect ECONNREFUSED ::1:11434
      at __node_internal_captureLargerStackTrace (node:internal/errors:496:5)
      at __node_internal_exceptionWithHostPort (node:internal/errors:671:12)
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:128:17) {
    errno: -61,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 11434
  }

This logs to the console for every model with the following

Warning: Encountered two children with the same key, anthropic/claude-3.5-sonnet. Keys should be unique so that components maintain their identity across updates. Non-unique keys may cause children to be duplicated and/or omitted — the behavior is unsupported and could change in a future version.
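
The connect ECONNREFUSED ::1:11434 part of that trace suggests Node is resolving localhost to the IPv6 loopback while Ollama is only listening on IPv4. A quick check (the OLLAMA_API_BASE_URL variable name is taken from this repo's .env.example; adjust if yours differs):

# Ollama's model list endpoint on the IPv4 loopback
curl http://127.0.0.1:11434/api/tags
# if that responds, point the app at 127.0.0.1 instead of localhost in .env.local:
# OLLAMA_API_BASE_URL=http://127.0.0.1:11434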

@dustinwloring1988
Collaborator

Please list your environment (OS, how you are running this fork, how you are running Ollama, and any and all commands you ran after cloning it).

@dvmac00

dvmac00 commented Dec 2, 2024

Please list your environment (OS, how you are running this fork, how you are running Ollama, and any and all commands you ran after cloning it).

OS: macOS 15.1 (24B83)

Running locally using VS Code
Ollama installed, but not attempting to use Ollama

After git pull I ran pnpm install and then pnpm run dev

Env file configured with keys for OpenRouter and OpenAI

I was previously able to run with the same setup on a different machine (an older Intel MacBook Pro), but since installing on an M1 Mac mini, some time after a merge on 25 November, every time I try to run a prompt I get the errors shown above.

This might not be an error with Ollama specifically, as the error starting with 'Warning: Encountered two children with the same key, anthropic/claude-3.5-sonnet. Keys should be unique...' hits the console for every provider, but the VS Code terminal only reports the Ollama error I quoted above.

@dustinwloring1988
Collaborator

I'm sorry, I do not have an Apple machine to debug on; maybe someone else can help. Maybe try http://host.docker.internal for the Ollama URL in the .env.local file.

@dustinwloring1988
Collaborator

Did this or the latest version help?
