[Grammar][Fix] Pass in stop tokens to xgrammar TokenizerInfo (#642)
Prior to this PR, models such as SmolLM, which use `<|endoftext|>` as an unk
token and `<|im_end|>` as a stop token, ran into issues with XGrammar.
XGrammar has a built-in set of stop tokens that includes `<|endoftext|>` but
not `<|im_end|>`. As a result, at the end of a structured generation,
`<|endoftext|>` is forced to be generated (as it is the only stop token
XGrammar recognizes), but since it is not an actual stop token for the model,
generation does not stop.

This PR explicitly passes the stop tokens (read from
`mlc-chat-config.json`) to `createTokenizerInfo()`, so the built-in set of
stop tokens is no longer used. In the case above, `<|im_end|>` becomes the
only stop token used by XGrammar, fixing the issue.
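The fix amounts to preferring the stop-token IDs declared in the chat config over an engine's built-in defaults. A minimal sketch of that selection logic, with a hypothetical helper name and illustrative token IDs (this is not the actual web-llm or xgrammar API, which takes the list via `xgr.TokenizerInfo.createTokenizerInfo`):

```typescript
// Hypothetical helper illustrating the stop-token selection logic.
// Token IDs below are illustrative, not SmolLM's real vocabulary IDs.
function resolveStopTokens(
  configStopTokens: number[] | undefined,
  builtinStopTokens: number[],
): number[] {
  // Prefer the stop tokens declared in mlc-chat-config.json; fall back
  // to the built-in defaults only when none are configured.
  if (configStopTokens && configStopTokens.length > 0) {
    return configStopTokens;
  }
  return builtinStopTokens;
}

// Mirroring the SmolLM case: suppose <|im_end|> has id 2 and is the real
// stop token, while the built-in set only contains <|endoftext|> (id 0).
const stops = resolveStopTokens([2], [0]); // → [2]
```

With the config's stop tokens passed through, XGrammar terminates structured generation on the token the model actually treats as a stop token.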
CharlieFRuan authored Nov 27, 2024
1 parent 082f04e commit e98369f
Showing 1 changed file with 1 addition and 0 deletions.
src/llm_chat.ts:

@@ -554,6 +554,7 @@ export class LLMChatPipeline {
       this.token_postproc_method,
       this.prepend_space_in_encode,
       this.fullVocabSize,
+      this.stopTokens,
     );
     this.grammarCompiler =
       await xgr.GrammarCompiler.createGrammarCompiler(
