
The usage shown in the source code and docs is invalid, and completion_with_retry has an argument bug in llms.py (tested 0.14) #44

Open
simongcc opened this issue May 6, 2024 · 1 comment



simongcc commented May 6, 2024

According to the documentation for LangChain - Cohere, the usage for the LLM is:

from langchain_cohere import Cohere

This is the same as what is stated in the source code's comments.
However, if used accordingly, the error is:

# ImportError: cannot import name 'Cohere' from 'langchain_cohere' (/Users/user-cloaked/miniforge3/envs/langchain/lib/python3.11/site-packages/langchain_cohere/__init__.py)
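
As a quick sanity check (a minimal sketch; the output depends on the installed version), you can list what the package actually exports at the top level:

# List the top-level names exported by the installed langchain_cohere package.
import importlib.metadata
import langchain_cohere

print(importlib.metadata.version("langchain-cohere"))
print([name for name in dir(langchain_cohere) if not name.startswith("_")])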

According to the structure of the source code, the actual working method seems to be:

from langchain_cohere.llms import Cohere

Assuming this usage is the intended one, there is a bug inside llms.py.

If Cohere is called according to the manual, it should be something like this:

model = Cohere(model="command", max_tokens=256, temperature=0.75)

However, it generates an error like this:

# TypeError: langchain_cohere.llms.completion_with_retry() got multiple values for keyword argument 'model'

This is because, when completion_with_retry() is called, self.model is passed explicitly while model is also forwarded in the keyword arguments, so the two collide; in other words, there is no way to specify a model.
I am not sure whether this is intended or just a copy-and-paste error; from reading the source code (without running it), the async path appears to have the same problem.
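
As a hypothetical illustration (simplified names, not the actual code in llms.py), the collision can be reproduced like this:

# Minimal sketch of how the "multiple values for keyword argument 'model'" error arises.
# Hypothetical simplification, not the actual langchain_cohere/llms.py code.
def completion_with_retry(llm, **kwargs):
    return kwargs  # stand-in for the real Cohere API call

class FakeCohere:
    def __init__(self, **params):
        self.model = params.get("model")
        self.default_params = params  # still contains "model"

    def _call(self, prompt):
        kwargs = {"prompt": prompt, **self.default_params}
        # "model" is passed both explicitly and inside **kwargs -> TypeError
        return completion_with_retry(self, model=self.model, **kwargs)

FakeCohere(model="command", max_tokens=256)._call("hi")
# TypeError: completion_with_retry() got multiple values for keyword argument 'model'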

So, for now, the working code is:

from langchain_cohere.llms import Cohere
model = Cohere(max_tokens=256, temperature=0.75)
# ...
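
For completeness, a minimal usage sketch (assumes a valid COHERE_API_KEY in the environment; the prompt is just an example):

# Minimal usage sketch; assumes COHERE_API_KEY is set in the environment.
from langchain_cohere.llms import Cohere

model = Cohere(max_tokens=256, temperature=0.75)
print(model.invoke("Write a one-line greeting."))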
simongcc changed the title on May 6, 2024, correcting "llm.py" to "llms.py".

julfr commented Jun 5, 2024

I have the same problem. Is there any news here?
