
[llama-3.1 70B]Open Interpreter's Preps did not complete after setting the model #1371

Open
mickitty0511 opened this issue Jul 30, 2024 · 12 comments · Fixed by #1400 · May be fixed by #1524
Comments


mickitty0511 commented Jul 30, 2024

Describe the bug

While following your official doc on using Ollama models, I tried using Llama 3.1 with Open Interpreter. However, errors were produced during the preparation steps that run after the model is set. I'd appreciate a detailed resolution or an explanation of what happened in my case. I hope some developers can reproduce this error and let me know what is going on.

Reproduce

Followed your official doc.

Used these commands:

  • ollama run llama3.1
  • interpreter --model ollama/llama3.1

Then Open Interpreter asked me whether I wanted to create a new profile file. I answered n.

The error is as follows.

[2024-07-30T03:56:01Z ERROR cached_path::cache] ETAG fetch for https://huggingface.co/llama3.1/resolve/main/tokenizer.json failed with fatal error
Traceback (most recent call last):

json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
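For reference, this JSONDecodeError is simply what json.loads raises when it is handed a truncated JSON fragment. A minimal sketch (independent of Open Interpreter's own code) that reproduces the same message:

import json

# A truncated JSON fragment, e.g. a streamed response cut off mid-string.
truncated_chunk = '{"name'

try:
    json.loads(truncated_chunk)
except json.JSONDecodeError as e:
    print(e)  # Unterminated string starting at: line 1 column 2 (char 1)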

Expected behavior

I expected it to complete the preps, based on what I read in your official docs.

Screenshots

No response

Open Interpreter version

0.3.4

Python version

3.11.5

Operating System name and version

Windows 11

Additional context

No response

mickitty0511 changed the title from "[llama-3.1 70B]Preps does not complete after setting the model" to "[llama-3.1 70B]Open Interpreter's Preps did not complete after setting the model" on Jul 30, 2024
@uthpala1000

got the same

@GuHugo95

same too

@ViperGash

same.

@GuHugo95

It seems llama3.1 isn't supported.

@GuHugo95

Maybe you can run ollama run llama3 and use interpreter --model ollama/llama3 instead.

@CyanideByte
Contributor

This PR should fix this issue.
#1400

@leafarilongamor

I'm still facing this issue on Windows 11, even running the latest Open Interpreter, Ollama, and Llama 3.1 versions.

PS C:\Users\User> interpreter --version
Open Interpreter 0.3.7 The Beginning (Ty and Victor)
PS C:\Users\User> ollama --version
ollama version is 0.3.6
PS C:\Users\User> interpreter --model ollama/llama3.1

▌ Model set to ollama/llama3.1

Loading llama3.1...

Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\Scripts\interpreter.exe\__main__.py", line 7, in <module>
    sys.exit(main())
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 586, in main
    start_terminal_interface(interpreter)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 540, in start_terminal_interface
    validate_llm_settings(
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\terminal_interface\validate_llm_settings.py", line 110, in validate_llm_settings
    interpreter.llm.load()
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 358, in load
    self.interpreter.computer.ai.chat("ping")
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\computer\ai\ai.py", line 130, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 291, in run
    yield from run_tool_calling_llm(self, params)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 177, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 420, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 400, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\litellm\llms\ollama.py", line 370, in ollama_completion_stream
    raise e
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\litellm\llms\ollama.py", line 348, in ollama_completion_stream
    function_call = json.loads(response_content)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
PS C:\Users\User>

I'm not sure about what I'm doing wrong.
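The traceback points to litellm's Ollama streaming path, where a streamed chunk is passed straight to json.loads as if it were a complete JSON tool call, so a non-JSON or partial chunk from Llama 3.1 triggers the Unterminated string error. A hypothetical sketch of the kind of defensive parse that avoids such a crash (not necessarily what PR #1400 or #1524 actually does):

import json

def parse_streamed_chunk(response_content: str) -> dict:
    # Hypothetical helper for illustration only; the real litellm /
    # Open Interpreter code paths differ.
    try:
        # A well-formed tool call arrives as complete JSON.
        return {"type": "tool_call", "value": json.loads(response_content)}
    except json.JSONDecodeError:
        # The model streamed ordinary text or an incomplete fragment,
        # so treat it as plain assistant output instead of crashing.
        return {"type": "text", "value": response_content}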

@MikeBirdTech
Contributor

@mickitty0511 @leafarilongamor

I brought this up internally and it's being worked on! Thanks for raising the issue

@UltraInstinct0x

Hi @MikeBirdTech,
The same issue occurs on macOS as well.

goku@192 ~ % interpreter --version
Open Interpreter 0.3.7 The Beginning (Ty and Victor)
goku@192 ~ % ollama -v
ollama version is 0.3.6

I am on macOS Version 15.0 Beta (24A5309e) if that makes any difference for you.
Best!


wa008 commented Nov 4, 2024

same issue

interpreter --version
Open Interpreter 0.4.3 Developer Preview

ollama -v
ollama version is 0.3.13

@omarnahdi

@MikeBirdTech @leafarilongamor Did y'all get the fix? I'm trying to load llama 3.2 and I'm getting the same error.

Contributor

CyanideByte commented Nov 7, 2024

This will be fixed with the merge of this PR: #1524

If you want to try it early, you can install it like this:
pip install --upgrade --force-reinstall git+https://github.com/CyanideByte/open-interpreter.git@local-fixes
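After installing from that branch, re-running interpreter --model ollama/llama3.1 (as in the original report) should show whether the model load now completes.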
