
[Grammar][Fix] Pass in stop tokens to xgrammar TokenizerInfo #642

Merged — 1 commit merged into mlc-ai:main on Nov 27, 2024

Conversation

CharlieFRuan (Contributor)

Prior to this PR, models such as SmolLM, which use <|endoftext|> as an unk token and <|im_end|> as a stop token, ran into issues with XGrammar. XGrammar has a built-in set of stop tokens that includes <|endoftext|> but not <|im_end|>. As a result, at the end of a structured generation, <|endoftext|> is forced to be generated (since it is the only stop token XGrammar recognizes), but because it is not an actual stop token for the model, generation does not stop.

This PR explicitly passes the stop tokens (read from mlc-chat-config.json) to createTokenizerInfo() so that XGrammar's built-in set of stop tokens is not used. In the case above, <|im_end|> becomes the only stop token used by XGrammar, fixing the issue. This achieves a goal similar to XGrammar's PR mlc-ai/xgrammar#96.
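A minimal sketch of the idea, for illustration only: the config shape and the `createTokenizerInfo()` call shown in the trailing comment are assumptions (the real mlc-chat-config.json layout and XGrammar's actual signature may differ); only the names `createTokenizerInfo` and `mlc-chat-config.json` come from the PR description itself.

```typescript
// Hypothetical shape of the relevant part of mlc-chat-config.json.
interface ChatConfig {
  conv_template: { stop_token_ids: number[] };
}

// Collect the stop token ids the runtime should honor, so they can be
// passed to XGrammar explicitly instead of relying on its built-in
// stop-token set.
function getStopTokenIds(config: ChatConfig): number[] {
  return config.conv_template.stop_token_ids;
}

// Example: a SmolLM-style config where <|im_end|> (say, id 2) is the real
// stop token, while <|endoftext|> (say, id 0) is only the unk token and
// must NOT be treated as a stop token.
const config: ChatConfig = { conv_template: { stop_token_ids: [2] } };
const stopTokenIds = getStopTokenIds(config);
console.log(stopTokenIds); // [ 2 ]

// The ids would then be forwarded explicitly, roughly like
// (hypothetical signature):
//   const tokenizerInfo = createTokenizerInfo(vocab, stopTokenIds);
// so XGrammar terminates on <|im_end|> rather than forcing <|endoftext|>.
```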

@CharlieFRuan CharlieFRuan merged commit e98369f into mlc-ai:main Nov 27, 2024
1 check passed
@CharlieFRuan CharlieFRuan deleted the fix-1127-grammar-stop branch November 27, 2024 18:53
CharlieFRuan added a commit that referenced this pull request Nov 27, 2024
### Change

- The only change is #642
  - Fixes structured generation for models like SmolLM2

### TVMjs
- No change, version `0.18.0-dev2` just like 0.2.71