Issues: ollama/ollama-python
Exception: Attempted to call a sync iterator on an async stream. (#309, opened Nov 5, 2024 by nlp4everyone)
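For context, this exception typically appears when a stream from AsyncClient is consumed with a plain `for` loop. A minimal sketch of the correct async iteration pattern (the model name here is an assumption; any pulled model works):

```python
import asyncio
from ollama import AsyncClient

async def main():
    # With AsyncClient and stream=True, chat() returns an async
    # iterator; consuming it with a plain `for` raises
    # "Attempted to call a sync iterator on an async stream."
    stream = await AsyncClient().chat(
        model='llama3.1',  # assumed model name
        messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
        stream=True,
    )
    async for part in stream:  # note: `async for`, not `for`
        print(part['message']['content'], end='', flush=True)

asyncio.run(main())
```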
Slow Inference on LLAMA 3.1 405B using ollama.generate with Large Code Snippets on multi-H100 GPUs (#302, opened Oct 21, 2024 by animeshj9)
Unable to generate responses normally when invoking the fine-tuned model using Python. (#288, opened Sep 23, 2024 by letdo1945)
Tool calls are not properly returned when chat() is called with stream=True (#279, opened Sep 12, 2024 by ggozad)
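As context for this report: with stream=False, any tool calls the model makes arrive on the response message, and the issue is that the same calls are not surfaced when streaming. A minimal sketch of the non-streaming setup (the model name and tool schema are illustrative assumptions):

```python
import ollama

# A tool definition in the JSON-schema style accepted by chat();
# the function name and parameters here are hypothetical.
tools = [{
    'type': 'function',
    'function': {
        'name': 'add_two_numbers',
        'description': 'Add two integers',
        'parameters': {
            'type': 'object',
            'properties': {
                'a': {'type': 'integer'},
                'b': {'type': 'integer'},
            },
            'required': ['a', 'b'],
        },
    },
}]

# Non-streaming call: tool calls, when made, appear under
# response['message']['tool_calls'].
response = ollama.chat(
    model='llama3.1',  # assumed tool-capable model
    messages=[{'role': 'user', 'content': 'What is 2 + 3?'}],
    tools=tools,
)
print(response['message'].get('tool_calls'))
```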
Please expose the contents of the _type file to allow for better static type analysis (#274, opened Sep 10, 2024 by johnch18)
Inconsistent prompt_eval_count for Large Prompts in Ollama Python Library (#271, opened Sep 6, 2024 by surajyadav91)
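For reference, prompt_eval_count is part of the response metadata returned by generate() and chat(). A minimal sketch of reading it (the model name is an assumption):

```python
import ollama

response = ollama.generate(
    model='llama3.1',  # assumed model name
    prompt='Summarize the plot of Hamlet in one sentence.',
)
# prompt_eval_count reports how many prompt tokens were evaluated;
# it can come in lower than the full prompt length when a cached
# prompt prefix is reused, which may explain inconsistent readings.
print(response['prompt_eval_count'])
```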
'timed out waiting for llama runner to start' in ~6 minutes when trying to load large model (#246, opened Aug 8, 2024 by alexander-potemkin)
Ollama in combination with Mistral NeMo is making up weird questions on its own (#240, opened Jul 30, 2024 by MauriceDroll)