Neural Text Generation (NTG) refers to a family of methods that use neural networks as function approximators to mimic the underlying distribution of natural language. The most prominent applications of the conditional variant include Neural Machine Translation (NMT), neural image captioning, and dialogue systems (chatbots). NTG research, however, usually focuses on the unconditional problem: learning the latent distribution of the target language itself, rather than a mapping from a source form to a target form.
This repository collects research papers on Neural Text Generation (NTG), organized into a taxonomy by publication time, method paradigm, and paper type.
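To make the unconditional setting concrete: an autoregressive language model factorizes p(x) = p(x_1) p(x_2 | x_1) ... p(x_T | x_{<T}) and generates text by ancestral sampling, one token at a time. Below is a minimal PyTorch sketch; the model is a hypothetical, untrained stand-in, and the class name, vocabulary size, and special token ids are illustrative, not taken from any paper.

```python
import torch

# Minimal sketch: unconditional generation from an autoregressive LM.
# TinyLM is a hypothetical placeholder mapping a token prefix to
# next-token logits; a trained language model would take its place.
class TinyLM(torch.nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)
        self.rnn = torch.nn.LSTM(dim, dim, batch_first=True)
        self.head = torch.nn.Linear(dim, vocab_size)

    def forward(self, tokens, state=None):
        h, state = self.rnn(self.embed(tokens), state)
        return self.head(h), state

@torch.no_grad()
def sample(lm, bos_id=0, eos_id=1, max_len=50):
    """Ancestral sampling: draw each token from p(x_t | x_<t)."""
    tokens, state = [bos_id], None
    for _ in range(max_len):
        logits, state = lm(torch.tensor([[tokens[-1]]]), state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        tokens.append(nxt)
        if nxt == eos_id:
            break
    return tokens

print(sample(TinyLM()))
```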
- Surveys
  - Neural Text Generation: Past, Present and Beyond
  - How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary?
- Metrics
  - BLEU: a method for automatic evaluation of machine translation
  - METEOR: An automatic metric for MT evaluation with improved correlation with human judgments
  - Perplexity—a measure of the difficulty of speech recognition tasks
  - NLL-oracle, NLL-test, Self-BLEU Texygen: A Benchmarking Platform for Text Generation Models (see the metrics sketch after the taxonomy)
- Architecture
  - NNLM A neural probabilistic language model
  - RNNLM Recurrent neural network based language model
  - LSTM Long short-term memory
  - GRU Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
  - SRU Training RNNs as Fast as CNNs
  - Hierarchical Softmax Classes for fast maximum entropy training
  - Feudal-like Language Model Long Text Generation via Adversarial Training with Leaked Information
- Training Algorithm / Models
  - Likelihood Based
    - Maximum Likelihood Estimation / Teacher Forcing A learning algorithm for continually running fully recurrent neural networks
    - Scheduled Sampling Scheduled sampling for sequence prediction with recurrent neural networks (see the sketch after the taxonomy)
  - Static-target Reinforcement Learning
    - Policy Gradient Policy gradient methods for reinforcement learning with function approximation
    - PG-BLEU: Use Policy Gradient to optimize BLEU.
  - Adversarial Methods
    - Adversary as a Regularization
      - Professor Forcing Professor forcing: A new algorithm for training recurrent networks
    - Direct
      - SeqGAN SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient (see the policy-gradient sketch after the taxonomy)
      - MaliGAN Maximum-likelihood augmented discrete generative adversarial networks
      - RankGAN Adversarial ranking for language generation
      - ScratchGAN (a scaled-up SeqGAN variant that removes pre-training) Training Language GANs from Scratch
      - RelGAN RelGAN: Relational Generative Adversarial Networks for Text Generation
    - Adversarial Feature Matching
    - Denoise Sequence-to-sequence Learning
    - Reparametrized Sampling (see the Gumbel-softmax sketch after the taxonomy)
    - Learning to Exploit Leaked Information
    - Smoothing-N-Rediscretization
      - WGAN-GP, GAN-GP Adversarial generation of natural language
  - Cooperative Methods
    - Likelihood Based
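The Texygen metrics above are simple to reimplement. A minimal sketch, assuming NLTK is available: Self-BLEU scores each generated sample against all other samples as references (high Self-BLEU indicates low diversity), and perplexity is the exponentiated average negative log-likelihood per token (NLL-test is the same quantity without the exponentiation, measured under the evaluated model; NLL-oracle swaps in a known oracle model). Function names here are illustrative, not Texygen's API.

```python
import math
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(samples, n=4):
    """Self-BLEU (Texygen): score each sample against all the other
    samples as references; higher values mean lower diversity."""
    weights = tuple(1.0 / n for _ in range(n))
    smooth = SmoothingFunction().method1
    scores = [
        sentence_bleu(samples[:i] + samples[i + 1:], hyp, weights,
                      smoothing_function=smooth)
        for i, hyp in enumerate(samples)
    ]
    return sum(scores) / len(scores)

def perplexity(token_log_probs):
    """Perplexity: exp of the average negative log-likelihood per token."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

generated = [s.split() for s in ["the cat sat", "the dog sat", "a bird flew away"]]
print(self_bleu(generated, n=2))
```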
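Teacher forcing always conditions the next-step prediction on the ground-truth prefix; scheduled sampling instead feeds the model its own previous prediction with some probability, annealed over training, to reduce exposure bias. A minimal sketch of one training pass, assuming the (logits, state) model interface of the TinyLM sketch above; sampling_prob = 0 recovers pure teacher forcing.

```python
import random
import torch

def scheduled_sampling_loss(lm, gold_tokens, sampling_prob):
    """One training pass over a batch of gold sequences (batch, time):
    with probability `sampling_prob`, feed the model's own previous
    prediction instead of the gold token."""
    state, loss = None, 0.0
    inp = gold_tokens[:, :1]                              # start-of-sequence token
    for t in range(1, gold_tokens.size(1)):
        logits, state = lm(inp, state)                    # next-token logits
        target = gold_tokens[:, t]
        loss = loss + torch.nn.functional.cross_entropy(logits[:, -1], target)
        if random.random() < sampling_prob:
            inp = logits[:, -1].argmax(-1, keepdim=True)  # model's own prediction
        else:
            inp = target.unsqueeze(1)                     # ground truth (teacher forcing)
    return loss / (gold_tokens.size(1) - 1)
```

In the paper, sampling_prob follows a decay schedule (linear, exponential, or inverse-sigmoid) rather than staying fixed, which is what gives the method its name.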
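SeqGAN sidesteps the non-differentiability of discrete sampling by treating the generator as a policy and the discriminator's score as a reward, optimized with REINFORCE. The sketch below is deliberately simplified: SeqGAN proper estimates a reward per timestep with Monte Carlo rollouts and typically subtracts a baseline, both omitted here, and `discriminator` is a hypothetical placeholder mapping a token sequence to a scalar score.

```python
import torch

def seqgan_generator_step(lm, discriminator, optimizer, bos_id=0, max_len=20):
    """One REINFORCE update in the spirit of SeqGAN: sample a sequence,
    score the finished sequence with the discriminator, and weight the
    sequence log-likelihood by that reward."""
    tokens, log_probs, state = [bos_id], [], None
    for _ in range(max_len):
        logits, state = lm(torch.tensor([[tokens[-1]]]), state)
        dist = torch.distributions.Categorical(logits=logits[0, -1])
        nxt = dist.sample()
        log_probs.append(dist.log_prob(nxt))
        tokens.append(int(nxt))
    with torch.no_grad():
        reward = discriminator(torch.tensor([tokens]))  # scalar: "how real?"
    loss = -reward * torch.stack(log_probs).sum()       # maximize expected reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return tokens, float(reward)
```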
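The Reparametrized Sampling category (used by RelGAN, among others) replaces the discrete sampling step with the Gumbel-softmax relaxation, so discriminator gradients can flow back into the generator. A minimal sketch; PyTorch also ships this as torch.nn.functional.gumbel_softmax.

```python
import torch

def gumbel_softmax_sample(logits, tau=1.0, hard=True):
    """Gumbel-softmax relaxation of categorical sampling: add Gumbel
    noise and take a temperature-controlled softmax. The straight-through
    variant (hard=True) discretizes on the forward pass but keeps the
    soft gradient on the backward pass."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
    soft = torch.softmax((logits + gumbel) / tau, dim=-1)
    if not hard:
        return soft
    one_hot = torch.zeros_like(soft).scatter_(-1, soft.argmax(-1, keepdim=True), 1.0)
    return one_hot + soft - soft.detach()  # straight-through estimator

print(gumbel_softmax_sample(torch.randn(2, 5)))
```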