I pretrained a language model on an English-only corpus, using BPE tokenization with vocab_size=32000.
I now want to continue training the model on a Japanese corpus.
Since the tokenizer cannot handle Japanese text, I'm wondering whether it's possible to extend the original BPE tokenizer, which was trained on the English corpus, so that it can also tokenize Japanese. Here is my idea:

1. Train another BPE model on the Japanese corpus with vocab_size=32000 (a minimal sketch of this step follows below).
2. Merge the two BPE models into a new model while keeping the English tokenization unchanged, so that English sentences are tokenized exactly as before.

The resulting vocab_size should be roughly 64000, allowing for some duplicates between the English and Japanese vocabularies.
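For step 1, here is a minimal sketch of what I have in mind, using the learn_bpe helper from subword-nmt (the file names are placeholders, and the exact function signature may differ between versions):

```python
from subword_nmt.learn_bpe import learn_bpe

# Learn 32000 BPE merge operations from a Japanese corpus.
# Note: BPE operates on whitespace-separated tokens, so the Japanese corpus
# should already be word-segmented (e.g. with a morphological analyzer).
with open("corpus.ja", encoding="utf-8") as infile, \
        open("codes.ja", "w", encoding="utf-8") as outfile:
    learn_bpe(infile, outfile, 32000)
```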
I'm not sure whether it's possible to merge the two BPE models into a new model while keeping the English tokenization unchanged. Any help would be appreciated!
Technically, you can just concatenate the two BPE files (called codes_file in the README), and this should achieve your desired result; a minimal sketch follows below. I did this back in 2015 to combine Cyrillic and Latin merge operations for Russian. A few things to pay attention to:

- The first line of each file gives some version info. You can remove this line from the second file before concatenating it to the first.
- The order of the files matters, since you will get different segmentations depending on the order of merge operations.
- If there is Latin-alphabet text in the Japanese file, there is a chance that the English tokenization changes in rare cases. To prevent this, you would have to use only the first 32000 merge operations for English text.
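A minimal sketch of the concatenation in Python, assuming the standard subword-nmt codes format where the first line of each file is a "#version: ..." header (file names are placeholders):

```python
def merge_codes(english_codes, japanese_codes, merged_codes):
    """Concatenate two BPE codes files, keeping the English merges first."""
    with open(merged_codes, "w", encoding="utf-8") as out:
        # The English codes (including their version header) come first, so the
        # first 32000 merge operations reproduce the original English segmentation.
        with open(english_codes, encoding="utf-8") as f:
            out.writelines(f)
        with open(japanese_codes, encoding="utf-8") as f:
            for i, line in enumerate(f):
                # Skip the duplicate version header of the second file.
                if i == 0 and line.startswith("#version"):
                    continue
                out.write(line)

merge_codes("codes.en", "codes.ja", "codes.en-ja")
```

If your version of apply-bpe supports limiting the number of merge operations, restricting English text to the first 32000 operations of the merged file would guarantee that the English segmentation stays identical.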