RuntimeError on README example #6

Open
stefan-jansen opened this issue Aug 28, 2020 · 3 comments

stefan-jansen commented Aug 28, 2020

Thank you for sharing the model and the blog post; it looks very interesting. Unfortunately, when running the example in a Python 3.7.5 virtual environment with the requirements installed, I'm getting the following error:

echo "tisimptant too spll chck ths dcment." \
>     | python src/tokenize.py \
>     | fairseq-interactive model7m/ \
>     --path model7m/checkpoint_best.pt \
>     --source-lang fr --target-lang en --beam 10 \
>    | python src/format_fairseq_output.py
Traceback (most recent call last):
  File "/home/stefan/.pyenv/versions/xfspell/bin/fairseq-interactive", line 11, in <module>
    load_entry_point('fairseq==0.9.0', 'console_scripts', 'fairseq-interactive')()
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq_cli/interactive.py", line 190, in cli_main
    main(args)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq_cli/interactive.py", line 149, in main
    translations = task.inference_step(generator, models, sample)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq/tasks/fairseq_task.py", line 265, in inference_step
    return generator.generate(models, sample, prefix_tokens=prefix_tokens)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq/sequence_generator.py", line 113, in generate
    return self._generate(model, sample, **kwargs)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq/sequence_generator.py", line 379, in _generate
    scores.view(bsz, beam_size, -1)[:, :, :step],
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq/search.py", line 81, in step
    torch.div(self.indices_buf, vocab_size, out=self.beams_buf)
RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
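
The last line of the traceback points at the cause: with torch 1.6.0 (see the pip freeze below), torch.div no longer performs integer division on integer tensors. A minimal sketch of the same operation, with made-up values standing in for fairseq's internal buffers:

import torch

indices = torch.tensor([7, 12, 19])  # made-up integer tensor of flat candidate indices
vocab_size = 5                       # made-up vocabulary size

# torch.div(indices, vocab_size)               # raises the RuntimeError above on torch 1.6
print(indices // vocab_size)                   # tensor([1, 2, 3]); floor division still works
print(torch.floor_divide(indices, vocab_size)) # same result, the explicit form the message suggests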

Here's the result of pip freeze:

aspell-python-py3==1.15
cffi==1.14.2
Cython==0.29.21
editdistance==0.5.3
fairseq==0.9.0
future==0.18.2
numpy==1.19.1
portalocker==2.0.0
pycparser==2.20
regex==2020.7.14
sacrebleu==1.4.13
torch==1.6.0
tqdm==4.48.2

Any guidance appreciated!

stefan-jansen changed the title from "RuntimeError: Integer division of tensors using div or / is no longer supported" to "RuntimeError on README example" on Aug 28, 2020
@aacs0130

According to this fairseq issue, you can fix it by updating search.py, changing 'torch.div(self.indices_buf, vocab_size, out=self.beams_buf)' to 'torch.floor_divide(self.indices_buf, vocab_size, out=self.beams_buf)':
facebookresearch/fairseq#2460
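
For reference, a runnable sketch of that one-line change, using made-up stand-ins for self.indices_buf and self.beams_buf from fairseq/search.py:

import torch

indices_buf = torch.tensor([7, 12, 19])    # stand-in for self.indices_buf
beams_buf = torch.empty_like(indices_buf)  # stand-in for self.beams_buf (integer output buffer)
vocab_size = 5                             # stand-in vocabulary size

# original line in search.py, fails on newer torch:
# torch.div(indices_buf, vocab_size, out=beams_buf)

# patched line, writes the integer quotient into the buffer:
torch.floor_divide(indices_buf, vocab_size, out=beams_buf)
print(beams_buf)  # tensor([1, 2, 3])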

@Omarnabk

I was able to run the code by upgrading fairseq to the latest version. 0.9 did not work for me.


DavidSorge commented Mar 5, 2021

Similar issue for me:

$ echo "The book Tom and Jerry put on the yellow desk yesterday war about NLP." | python src/tokenize.py | fairseq-interactive model7m/ --path model7m/checkpoint_best.pt --source-lang fr --target-lang en | python src/format_fairseq_output.py
Traceback (most recent call last):
  File "/home/dsorge/anaconda3/envs/spellcheck/bin/fairseq-interactive", line 8, in <module>
    sys.exit(cli_main())
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq_cli/interactive.py", line 190, in cli_main
    main(args)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq_cli/interactive.py", line 149, in main
    translations = task.inference_step(generator, models, sample)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq/tasks/fairseq_task.py", line 265, in inference_step
    return generator.generate(models, sample, prefix_tokens=prefix_tokens)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq/sequence_generator.py", line 113, in generate
    return self._generate(model, sample, **kwargs)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq/sequence_generator.py", line 376, in _generate
    cand_scores, cand_indices, cand_beams = self.search.step(
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq/search.py", line 81, in step
    torch.div(self.indices_buf, vocab_size, out=self.beams_buf)
RuntimeError: result type Float can't be cast to the desired output type Long

Edit: The solution by @aacs0130 worked for me! Thanks for your help, Cecilia!
