I have checked the documentation several times, and I'm calling the generate function like this:
```python
import ollama

options = {
    "temperature": 0.2,
    "top_p": 0.8,
    "seed": 42,
    "num_predict": 300,
}
response = ollama.generate(
    model=model_name,
    prompt=prompt,
    stream=False,
    options=options,
    system=system_message,
    format="json",
)
```
Although I set the stream flag to False, I still don't receive the whole response in a single package: the first package arrives with done = False. The other problem is that sometimes, even when a package is final (done = True), not all of the expected additional information is present; for example, prompt_eval_count is missing.

Am I doing something wrong here?
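In the meantime, a defensive workaround is to accumulate packages until one arrives with done = True, and to treat the final statistics as optional. This is a minimal sketch of that idea; `fake_stream` is a stand-in generator (not part of ollama) that mimics the chunk shape described above (`response`, `done`, and final-only fields such as `prompt_eval_count`), so the accumulation logic can be shown without a live server:

```python
def accumulate(chunks):
    """Join partial 'response' texts and collect stats from the final chunk.

    Tolerates both a single complete package and a sequence of partial
    ones, and does not assume that fields like prompt_eval_count exist.
    """
    text_parts = []
    stats = {}
    for chunk in chunks:
        text_parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            # Final package: keep whatever stats it carries, if any.
            stats = {k: v for k, v in chunk.items()
                     if k not in ("response", "done")}
            break
    return "".join(text_parts), stats


def fake_stream():
    # Hypothetical stand-in for a streamed ollama.generate() response.
    yield {"response": "Hello, ", "done": False}
    yield {"response": "world.", "done": True,
           "prompt_eval_count": 12, "eval_count": 5}


text, stats = accumulate(fake_stream())
print(text)                             # Hello, world.
print(stats.get("prompt_eval_count"))   # 12
```

Using `stats.get("prompt_eval_count")` rather than indexing means missing final statistics degrade to `None` instead of raising a `KeyError`.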