We implement a Deep Character-Level Neural Machine Translation (DCNMT) based on Theano and Blocks. Please install the required packages according to the Blocks documentation before testing our program. Note that you should use Python 3 instead of Python 2; the scripts will run into problems under Python 2.
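If you start from a clean environment, a minimal setup sketch looks like the following. This is an assumption rather than the project's official recipe, and the Blocks repository URL may differ, so follow the Blocks documentation for the recommended procedure.

```bash
# A minimal setup sketch (assumption: pip-based install; see the Blocks docs).
pip install numpy theano
# Blocks is installed from its Git repository; the URL/branch may differ.
pip install git+https://github.com/mila-iqia/blocks.git
```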
The architecture of DCNMT, a single, large neural network, is shown in the following figure.
If you want to train your own model, please prepare a parallel corpus, such as the corpora from WMT. A GPU with 12GB of memory will be helpful. You can run `bash train.sh`, or follow these steps (a command-line sketch of the whole pipeline follows this list):

- Download the helper scripts (`tokenizer.perl`, `multi-bleu.perl`) and the `nonbreaking_prefixes` directory from the Moses repository.
- Download the datasets, then tokenize and shuffle the corpus.
- Create the character list for both languages using `create_vocab.py` in the `preprocess` folder. Don't forget to pass the language setting, vocabulary size, and file name to this script.
- Create a `data` folder, and put the `vocab.*.*.pkl` and `*.shuf` files in it.
- Prepare the tokenized validation and test sets, and put them in the `data` folder as well.
- Edit `configurations.py`, then run `python training.py`. It will take 1 to 2 weeks to train a good model.
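As mentioned above, here is a command-line sketch of the whole pipeline for an en-fr corpus. The file names, vocabulary sizes, and the argument order of `create_vocab.py` are illustrative assumptions; check `preprocess/create_vocab.py` for its exact interface.

```bash
# Tokenize both sides with the Moses tokenizer.
perl tokenizer.perl -l en < train.en > train.en.tok
perl tokenizer.perl -l fr < train.fr > train.fr.tok

# Shuffle both sides with the same permutation to keep sentence pairs aligned.
paste train.en.tok train.fr.tok | shuf > train.both
cut -f1 train.both > train.en.tok.shuf
cut -f2 train.both > train.fr.tok.shuf

# Build the character vocabularies (arguments are placeholders; pass the
# language, vocabulary size, and file name as create_vocab.py expects).
python preprocess/create_vocab.py en 120 train.en.tok.shuf
python preprocess/create_vocab.py fr 120 train.fr.tok.shuf

# Collect everything the trainer expects, then start training.
mkdir -p data
mv vocab.*.*.pkl *.shuf data/
python training.py
```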
We have trained several models, which are listed in the following table. However, because of limited GPU availability and the long training time (two weeks or more), we did not have enough time and resources to train on more language pairs. Would you like to help us train on more language pairs? If you run into any trouble, please open an issue or email me directly at `echo c3dvcmQueW9ya0BnbWFpbC5jb20K | base64 -d`. Thanks!
language pair | dataset | encoder_layers | transition_layers | BLEU |
---|---|---|---|---|
en-fr | same as RNNsearch | 1 | 1 | 30.46 |
en-fr | same as RNNsearch | 2 | 1 | 31.98 |
en-fr | same as RNNsearch | 2 | 2 | 32.12 |
en-cs | wmt15 | 1 | 1 | 16.43 |
These models were all trained for about 5 epochs and evaluated on `newstest2014`, using the model that scored best on the `newstest2013` validation set. You can download these models from Dropbox, then put them (`dcnmt_*`, `data`, `configurations.py`) in this directory. To perform testing, just run `python testing.py`. It takes about an hour to translate 3000 sentences on a moderate GPU.
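If you also want a BLEU score for the translations, the `multi-bleu.perl` script downloaded earlier can be used. The file names below are placeholders; the actual name of the output file written by `testing.py` may differ.

```bash
# Translate the test set, then score it against the tokenized reference.
python testing.py
perl multi-bleu.perl newstest2014.fr.tok < translation_output.txt
```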
Please prepare a wordlist to compute the character-level word embeddings, then run `python embedding.py` to view the results.
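For instance, a wordlist is just a plain text file; the one-word-per-line format and the file name below are assumptions, so check `embedding.py` for the exact input it expects.

```bash
# Build a small wordlist (assumed format: one word per line), including a
# misspelled word to inspect how close its embedding is to the correct one.
printf "organisation\norgainisation\nelection\n" > wordlist.txt
python embedding.py
```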
Robust handling of misspelled words is a special feature of the DCNMT model. For example:
Source: Unlike in Canada, the American States are responisble for the orgainisation of federal elections in the United States.
Ref: Contrairement au Canada, les États américains sont responsables de l’organisation des élections fédérales aux États-Unis.
Google: Contrairement au Canada, les États américains sont responisble pour le orgainisation des élections fédérales aux États-Unis.
DCNMT: Contrairement au Canada, les États américains sont responsables de l’organisation des élections fédérales aux États-Unis.
The performance of misspelling correction will be analyzed later.
This program has been tested under the latest Theano and Blocks; it may fail to run with other versions. If you fail to run these scripts, please make sure that you can run the examples that ship with Blocks.