OLLAMA LLAMA 3.2 fails to run with JSON Encoding error #1514

Open
meetr1912 opened this issue Oct 31, 2024 · 21 comments · May be fixed by #1524

@meetr1912

Describe the bug

interpreter --local

Open Interpreter supports multiple local model providers.

[?] Select a provider:

Ollama
Llamafile
LM Studio
Jan

[?] Select a model:
llama3.2

↓ Download llama3.1
↓ Download phi3
↓ Download mistral-nemo
↓ Download gemma2
↓ Download codestral
Browse Models ↗

Downloading llama3.1...

pulling manifest
pulling 8eeb52dfb3bb... 100% ▕████████████████▏ 4.7 GB
pulling 948af2743fc7... 100% ▕████████████████▏ 1.5 KB
pulling 0ba8f0e314b4... 100% ▕████████████████▏ 12 KB
pulling 56bb8bd477a5... 100% ▕████████████████▏ 96 B
pulling 1a4c3c319823... 100% ▕████████████████▏ 485 B
verifying sha256 digest
writing manifest
success
Loading llama3.1...

Traceback (most recent call last):
File "/opt/anaconda3/bin/interpreter", line 8, in
sys.exit(main())
^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
interpreter = profile(
^^^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
return apply_profile(interpreter, profile, profile_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
exec(profile["start_script"], scope, scope)
File "", line 1, in
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/core.py", line 145, in local_setup
self = local_setup(self)
^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
interpreter.computer.ai.chat("ping")
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 86, in run
self.load()
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "/opt/anaconda3/lib/python3.12/site-packages/litellm/llms/ollama.py", line 428, in ollama_completion_stream
raise e
File "/opt/anaconda3/lib/python3.12/site-packages/litellm/llms/ollama.py", line 406, in ollama_completion_stream
function_call = json.loads(response_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/json/init.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
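
For context: the failing call is json.loads(response_content) inside litellm's ollama_completion_stream (the last frames above). A truncated chunk of streamed JSON reproduces the same error; this is a minimal sketch, not taken from the report:

import json

# An incomplete streamed chunk: the string that opens at column 2 is never closed,
# the same failure json.loads() hits on a partial Ollama response.
try:
    json.loads('{"name')
except json.JSONDecodeError as err:
    print(err)  # Unterminated string starting at: line 1 column 2 (char 1)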

Reproduce

Run interpreter --local as shown above, select Ollama as the provider, and choose a model.

Expected behavior

The model should load and the interactive session should start without a JSON decoding error.

Screenshots

No response

Open Interpreter version

0.4.3

Python version

3.12.4

Operating System name and version

macOS 13

Additional context

No response

@MikeBirdTech
Contributor

Can you please try again with Python 3.11 or 3.10?

https://docs.openinterpreter.com/getting-started/setup
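
For reference, one way to get a clean Python 3.11 environment (assuming conda, since the traceback shows an Anaconda install; the environment name is just an example):

conda create -n oi-py311 python=3.11 -y
conda activate oi-py311
pip install 'open-interpreter[local]'
interpreter --local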

@niehu2018

Same issue when using llama3.2:1b.

$ interpreter --local

Open Interpreter supports multiple local model providers.

[?] Select a provider:

Ollama
Llamafile
LM Studio
Jan

[?] Select a model:

llama3.2:1b
llama3.2:3b
llava-llama3
llama3.1:8b
phi3:3.8b
nomic-embed-text
qwen2:7b
↓ Download llama3.1
↓ Download phi3
↓ Download mistral-nemo
↓ Download gemma2
↓ Download codestral
Browse Models ↗

Loading llama3.2:1b...

Traceback (most recent call last):
File "/Users/niehu/miniforge3/envs/open_interpreter/bin/interpreter", line 8, in
sys.exit(main())
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
interpreter = profile(
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
return apply_profile(interpreter, profile, profile_path)
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
exec(profile["start_script"], scope, scope)
File "", line 1, in
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/core.py", line 145, in local_setup
self = local_setup(self)
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
interpreter.computer.ai.chat("ping")
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 86, in run
self.load()
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/litellm/llms/ollama.py", line 428, in ollama_completion_stream
raise e
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/litellm/llms/ollama.py", line 406, in ollama_completion_stream
function_call = json.loads(response_content)
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/json/init.py", line 346, in loads
return _default_decoder.decode(s)
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)

@vrijmetse

I got the same error with both Python 3.10 and 3.11.

@Grunkah

Grunkah commented Nov 3, 2024

Yes, same here with both Python 3.10 and 3.11 on Windows 11. I don't know what else to do after reinstalling Python and Open Interpreter. 😊

Open Interpreter supports multiple local model providers.

[?] Select a provider:

Ollama
Llamafile
LM Studio
Jan

[?] Select a model:
llama3.2:1b
llama3-groq-tool-use
llama3.1:8b
llama3.1
llama3.2

qwen2.5-coder
deepseek-coder-v2
mistral
nemotron-mini
qwen2.5:7b
starcoder2:3b
gemma2
codegemma

Loading qwen2.5-coder...

Traceback (most recent call last):
File "", line 198, in run_module_as_main
File "", line 88, in run_code
File "D:\OpenInterpreter\oi_venv\Scripts\interpreter.exe_main
.py", line 7, in
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 471, in start_terminal_interface
interpreter = profile(
^^^^^^^^
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 64, in profile
return apply_profile(interpreter, profile, profile_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 148, in apply_profile
exec(profile["start_script"], scope, scope)
File "", line 1, in
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\core.py", line 145, in local_setup
self = local_setup(self)
^^^^^^^^^^^^^^^^^
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\local_setup.py", line 314, in local_setup
interpreter.computer.ai.chat("ping")
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 86, in run
self.load()
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
^^^^^^^^^^^^^^^^^
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\litellm\llms\ollama.py", line 428, in ollama_completion_stream
raise e
File "D:\OpenInterpreter\oi_venv\Lib\site-packages\litellm\llms\ollama.py", line 406, in ollama_completion_stream
function_call = json.loads(response_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Grunkah\AppData\Local\Programs\Python\Python311\Lib\json_init
.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Grunkah\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Grunkah\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
(oi_venv) PS D:\OpenInterpreter>

@MikeBirdTech
Contributor

MikeBirdTech commented Nov 4, 2024

Please run pip install 'open-interpreter[local]'

If the issue persists, please share the output of interpreter --version and ollama --version

@Grunkah

Grunkah commented Nov 4, 2024

Please run pip install open-interpreter[local]

If the issue persists, please share the output of interpreter --version and ollama --version

Sadly, it won't work. :(

I previously had Python 3.12 installed on the local machine, but I switched to 3.11 and deleted all 3.12 dependencies.

-> Sandbox:

I also installed it in a sandbox. That doesn't work either, but there I get a different error message.

-> Local:

(oi_venv) PS D:\OpenInterpreter> interpreter --version
Open Interpreter 0.4.3 Developer Preview
(oi_venv) PS D:\OpenInterpreter> ollama --version
ollama version is 0.3.14
(oi_venv) PS D:\OpenInterpreter> python --version
Python 3.11.0

  • During installation:

"DEPRECATION: wget is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at pypa/pip#8559
Running setup.py install for wget ... done
DEPRECATION: pyperclip is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at pypa/pip#8559"

Edit:
After reinstalling with python -m pip install --upgrade pip, the deprecation warnings above no longer occur, but I still get the same error when starting with --local.

@MikeBirdTech
Contributor

@Grunkah

Sadly, it won't work. :(

What won't work?

@Grunkah

Grunkah commented Nov 4, 2024

@Grunkah

Sadly, it won't work. :(

What won't work?

I've tried the approach suggested above and have also recently reinstalled Open Interpreter a few times. I've also reinstalled Python in an attempt to resolve the issue.

Every attempt to install Open Interpreter and start it with interpreter --local ends in the same error as above.

I'm starting to suspect a configuration problem with my Windows 11 installation. Unfortunately, I'm not sure what that would entail or how to fix it. I don't want to reinstall Windows if fixing the issue is an option.

To be honest, I'm getting frustrated with the issues caused by Windows 11 again; it's not the first time I've encountered problems like this due to its quirks. Last time it was related to PyTorch 😂. Guess how I fixed it.

@tysonchamp

Loading llama3.2:3b...

Traceback (most recent call last):
File "/home/tyson/open-interpreter/.env/bin/interpreter", line 8, in
sys.exit(main())
^^^^^^
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
interpreter = profile(
^^^^^^^^
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
return apply_profile(interpreter, profile, profile_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
exec(profile["start_script"], scope, scope)
File "", line 1, in
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/core.py", line 145, in local_setup
self = local_setup(self)
^^^^^^^^^^^^^^^^^
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
interpreter.computer.ai.chat("ping")
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 86, in run
self.load()
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
^^^^^^^^^^^^^^^^^
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/litellm/llms/ollama.py", line 428, in ollama_completion_stream
raise e
File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/litellm/llms/ollama.py", line 406, in ollama_completion_stream
function_call = json.loads(response_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/json/init.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
(.env) tyson@tyson-b760mds3h:~/open-interpreter$ interpreter --version
Open Interpreter 0.4.3 Developer Preview
(.env) tyson@tyson-b760mds3h:~/open-interpreter$ ollama --version
ollama version is 0.4.0

@tysonchamp

Same issue here; still not working.

@Notnaton
Collaborator

Notnaton commented Nov 8, 2024

Add --no-llm_supports_functions when launching interpreter.
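
For example, the full command (as confirmed later in this thread) would be:

interpreter --local --no-llm_supports_functions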

@telmob
Copy link

telmob commented Nov 12, 2024

Basically all Ollama models are failing to run. Some even load, but they all crash after entering a prompt.

@Notnaton
Collaborator

Notnaton commented Nov 12, 2024

Basically all Ollama models are failing to run. Some even load, but they all crash after entering a prompt.

See my comment above #1514 (comment)

This will be fixed in the next release: #1524

@telmob

telmob commented Nov 12, 2024

See my comment above #1514 (comment)

I've tried that option. It's still not working.

This will be fixed in the next release: #1524

Great. Thanks!

@bg9cxn

bg9cxn commented Nov 16, 2024

Add --no-llm_supports_functions when launching interpreter.

Thanks! It works! Adding this parameter helps.
Command:
interpreter --local --no-llm_supports_functions
Then select "Ollama" and the model that you want.

@1caiji23

I hit the same JSON error, and interpreter --local --no-llm_supports_functions helps, but can it still work as well as it does with OpenAI models?

@1caiji23

1caiji23 commented Dec 2, 2024

One remaining issue: I have to pass interpreter --local --no-llm_supports_functions every time I launch, as if the computer forgets the configuration.

@Notnaton
Collaborator

Notnaton commented Dec 5, 2024

This will be fixed in the next update, I believe.

@mordsm

mordsm commented Dec 9, 2024

How can I use "interpreter --local --no-llm_supports_functions" inside python code
currently
interpreter.llm.model = "ollama/llama3.2" # Specific configuration may vary
interpreter.llm.api_base = "http://localhost:11434" # Typical Ollama local endpoint
#interpreter.llm.api_key = "your_api_key_if_required"

Start interactive session

interpreter.chat()

@Notnaton
Collaborator

interpreter.llm.supports_functions = False
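
Putting that together with the configuration above, a minimal sketch (model name and endpoint are the example values from this thread, not a definitive setup):

from interpreter import interpreter

# Point Open Interpreter at a local Ollama model (example values from this thread)
interpreter.llm.model = "ollama/llama3.2"
interpreter.llm.api_base = "http://localhost:11434"

# Python equivalent of the --no-llm_supports_functions CLI flag
interpreter.llm.supports_functions = False

# Start the interactive session
interpreter.chat()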

@mordsm

mordsm commented Dec 10, 2024 via email
