I have some questions #16
Hi! I have recently checked, and the code below works with this result:

Note that the quantized version of the model may require special handling (see the Colab). The non-instruction-tuned version of the model (LongLLaMA-Code 7B) may have problems with answering questions (it may, for example, answer with a bunch of new lines). If you are loading the model as LLaMA, make sure to use an up-to-date version of the transformers library (the old one ignores the parameter).
Could you give me a way to contact you? I copied the code and moved both the model and the input to the GPU, and my results are some Lisp without any sense...
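For comparison, this is roughly the loading-and-generation pattern being discussed; the checkpoint name and generation settings here are assumptions on my side, not necessarily what either of us ran. The key points are `trust_remote_code=True` (so the LongLLaMA modelling code is used rather than plain LLaMA) and keeping the model and the tokenized inputs on the same device:

```python
# Sketch of loading a LongLLaMA checkpoint and generating on GPU.
# The checkpoint name and max_new_tokens below are illustrative
# assumptions, not the maintainers' exact configuration.
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM


def generate(prompt: str, checkpoint: str = "syzymon/long_llama_3b_instruct") -> str:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = LlamaTokenizer.from_pretrained(checkpoint)
    # trust_remote_code=True pulls in the LongLLaMA modelling code;
    # loading the checkpoint as plain LLaMA instead requires an
    # up-to-date transformers version.
    model = AutoModelForCausalLM.from_pretrained(
        checkpoint, torch_dtype=torch.float32, trust_remote_code=True
    ).to(device)
    # The tokenized inputs must live on the same device as the model,
    # otherwise generation fails or silently misbehaves.
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

If a non-instruction-tuned checkpoint such as LongLLaMA-Code 7B still produces nonsense for question-style prompts, that matches the note above; the instruction-tuned variant is the one meant for Q&A use.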