There is an issue with '\n' not working properly as a stop string in Llama-3. Passing '\n' through tokenizer.encode yields token ID 198, but generation does not terminate at the newline and keeps producing subsequent text:

eos_token_id = base_model.tokenizer.encode("\n", bos=False, eos=False)[-1]
In contrast, other stop strings like 'Q' work correctly. Additionally, testing with Llama-2 shows that all strings, including '\n', behave as expected.
Could you please look into this issue?
Yes, Llama-3's tokenization differs slightly from other models: for example, '\n\n' is a single token, distinct from '\n'. To use Llama-3 here, you may want to experiment with the tokenizer and investigate which token ID is really the desired eos_token in your use case.
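To illustrate the point above, here is a minimal, self-contained sketch. The StubTokenizer is a hypothetical stand-in (not the real Llama-3 tokenizer) that mimics the relevant behavior: '\n\n' is merged into one token distinct from '\n', so a stop check against the '\n' token ID (198) never fires when the model emits a paragraph break.

```python
class StubTokenizer:
    # Toy vocabulary mimicking the relevant Llama-3 behavior:
    # '\n' maps to 198, but '\n\n' is a single, different token.
    # The IDs and merges here are illustrative, not the real vocab.
    vocab = {"\n": 198, "\n\n": 271, "Q": 48}

    def encode(self, text):
        # Greedy longest-match tokenization over the toy vocab.
        ids, i = [], 0
        while i < len(text):
            for piece in sorted(self.vocab, key=len, reverse=True):
                if text.startswith(piece, i):
                    ids.append(self.vocab[piece])
                    i += len(piece)
                    break
            else:
                raise ValueError(f"unknown piece at position {i}")
        return ids


tok = StubTokenizer()
eos_token_id = tok.encode("\n")[-1]  # 198, as in the issue

# If the model emits a paragraph break, it produces the merged token,
# not 198, so a loop that only checks for 198 never stops:
generated = tok.encode("\n\n")
print(generated, eos_token_id in generated)  # [271] False
```

This is why a single-character stop string like 'Q' works (it has no merged multi-character variants), while '\n' does not: the check needs to cover every token whose decoded text ends the sequence, not just the token for the bare character.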