OLLAMA LLAMA 3.2 fails to run with JSON Encoding error #1514
Comments
Can you please try again with Python 3.11 or 3.10? |
Same issue when using llama3.2:1b with "$ interpreter --local". Open Interpreter supports multiple local model providers. [?] Select a provider:
[?] Select a model:
Loading llama3.2:1b... Traceback (most recent call last): |
I got the same error for both 3.10 and 3.11 |
Yes, same here with both versions, 3.10 and 3.11, on Windows 11. Don't know what to do after reinstalling Python and OI. 😊 Open Interpreter supports multiple local model providers. [?] Select a provider:
[?] Select a model:
Loading qwen2.5-coder... Traceback (most recent call last): |
Please run If the issue persists, please share the output of |
Sadly it won't work :( I previously had Python 3.12 installed on the local machine, but I changed it to 3.11 and deleted all 3.12 dependencies. -> Sandbox: I installed it in a sandbox; sadly it won't work there either, but I get a different error message. -> Local: (oi_venv) PS D:\OpenInterpreter> interpreter --version
"DEPRECATION: wget is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at pypa/pip#8559 Edit: |
What won't work? |
I've tried the approach suggested above and have also recently reinstalled Open-Interpreter a few times. I've also reinstalled Python in an attempt to resolve the issue. Multiple attempts to install Open-Interpreter and start "interpreter --local" just result in the same error as above. I'm starting to suspect there might be a configuration problem with my Windows 11 installation, though I'm not sure what that would entail or how to fix it. I don't want to reinstall Windows if fixing the issue is an option. To be honest, I'm getting frustrated with the issues caused by Windows 11 again; it's not the first time I've encountered problems like this due to its quirks. Last time it was related to PyTorch 😂. Guess how I fixed it. |
Loading llama3.2:3b... Traceback (most recent call last): |
Same issue, not working. |
Add: |
Basically all Ollama models are failing to run. Some even load, but they all crash after the prompt is entered. |
See my comment above (#1514 (comment)). Will be fixed in the next release (#1524). |
I've tried that option. It's still not working.
Great. Thanks! |
Thanks! It works! Adding this parameter helps. |
I hit the same JSON error. "interpreter --local --no-llm_supports_functions" works, but can it behave the same as with OpenAI? |
One problem: I have to use "interpreter --local --no-llm_supports_functions" every time I run it, as if the setting is forgotten between runs. |
This will be fixed next update I believe |
How can I use "interpreter --local --no-llm_supports_functions" inside Python code, e.g. when starting an interactive session with interpreter.chat()?
|
Thanks, it worked!
On Tue, Dec 10, 2024, Anton Solbjørg wrote:
interpreter.llm.supports_functions = False
|
Describe the bug
interpreter --local
Open Interpreter supports multiple local model providers.
[?] Select a provider:
[?] Select a model:
llama3.2
Downloading llama3.1...
pulling manifest
pulling 8eeb52dfb3bb... 100% ▕████████████████▏ 4.7 GB
pulling 948af2743fc7... 100% ▕████████████████▏ 1.5 KB
pulling 0ba8f0e314b4... 100% ▕████████████████▏ 12 KB
pulling 56bb8bd477a5... 100% ▕████████████████▏ 96 B
pulling 1a4c3c319823... 100% ▕████████████████▏ 485 B
verifying sha256 digest
writing manifest
success
Loading llama3.1...
Traceback (most recent call last):
File "/opt/anaconda3/bin/interpreter", line 8, in <module>
sys.exit(main())
^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
interpreter = profile(
^^^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
return apply_profile(interpreter, profile, profile_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
exec(profile["start_script"], scope, scope)
File "<string>", line 1, in <module>
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/core.py", line 145, in local_setup
self = local_setup(self)
^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
interpreter.computer.ai.chat("ping")
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 86, in run
self.load()
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "/opt/anaconda3/lib/python3.12/site-packages/litellm/llms/ollama.py", line 428, in ollama_completion_stream
raise e
File "/opt/anaconda3/lib/python3.12/site-packages/litellm/llms/ollama.py", line 406, in ollama_completion_stream
function_call = json.loads(response_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
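For context, the final JSONDecodeError is what json.loads raises when handed a string value that is cut off before its closing quote, which is consistent with litellm parsing a streamed Ollama chunk before the response is complete. A minimal reproduction of just the parse error:

```python
import json

# A response fragment truncated in the middle of a JSON string value
truncated = '"unterminated'

try:
    json.loads(truncated)
except json.JSONDecodeError as err:
    # e.g. "Unterminated string starting at: line 1 column 1 (char 0)"
    print(err)
```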
Reproduce
above command
Expected behavior
above
Screenshots
No response
Open Interpreter version
0.4.3
Python version
3.12.4
Operating System name and version
macOS 13
Additional context
No response
The text was updated successfully, but these errors were encountered: