### Instruction: ### Response: #214
Comments
As far as I understand, those two are hardcoded keywords that turn Alpaca into a chat model on top of LLaMA, which is a text-completion model. Yet at its core Alpaca is still a completion model, so it may also guess the next "### Instruction:" if it thinks the previous instruction was completed successfully. This doesn't come from the code but from the model's own text completion.
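For context, here is a minimal sketch of an Alpaca-style prompt; the exact template wording is an assumption and may differ from what the repository actually sends to the model. Because the model only ever sees a flat text stream, emitting another "### Instruction:" block after finishing its answer is a perfectly plausible continuation of the pattern:

```python
# Sketch of an Alpaca-style prompt template (assumed wording, not the
# project's verbatim template).
def build_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(build_prompt("Name three primary colors."))
# The model is asked to *complete* this text; once its response is done,
# continuing with another "### Instruction:" section matches the pattern
# it was fine-tuned on.
```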
Yeah sure.. but how do we stop that from happening?
It's possible to avoid the tokens associated with "##", for example by suppressing them at sampling time.
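One way to do that, sketched below with placeholder names (this is not the project's actual sampling code), is to push the logits of any vocabulary entry whose text contains "##" to negative infinity before sampling:

```python
import math

def suppress_hash_tokens(logits: list[float], vocab: list[str]) -> list[float]:
    """Return a copy of the logits with '##'-containing tokens disabled.

    `logits[i]` is the score for vocabulary entry `vocab[i]`; both names
    are placeholders, not the data structures the project really uses.
    """
    out = list(logits)
    for i, piece in enumerate(vocab):
        if "##" in piece:
            out[i] = -math.inf  # this token can no longer be sampled
    return out

# Example: the "##" token is ruled out, the others are untouched.
print(suppress_hash_tokens([1.0, 2.0, 0.5], ["Hello", "##", " world"]))
```

Note that blanket-banning "##" tokens also blocks legitimate uses such as Markdown headings in a response, which is why stopping on the full "### Instruction:" string is often the gentler option.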
Very often the response contains "### Instruction:" and "### Response:" sections, often even after a good response.
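A common workaround (not necessarily what this project implements) is to treat "### Instruction:" as a stop sequence: watch the generated text and cut it off as soon as the marker appears. A minimal sketch, assuming the marker text below:

```python
STOP_SEQUENCE = "### Instruction:"  # assumed marker; adjust to the actual template

def truncate_at_stop(generated: str, stop: str = STOP_SEQUENCE) -> str:
    """Drop everything from the first occurrence of the stop sequence onward."""
    idx = generated.find(stop)
    return generated if idx == -1 else generated[:idx].rstrip()

text = (
    "The three primary colors are red, blue and yellow.\n\n"
    "### Instruction:\nName three animals."
)
print(truncate_at_stop(text))
# -> "The three primary colors are red, blue and yellow."
```

In an interactive loop the same check can be applied to the partial output after each generated token, so generation halts as soon as the marker shows up instead of after the fact.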
Example of the system going nuts: