Generation of Videos
Deval Srivastava edited this page Jul 12, 2018 · 2 revisions
- To generate videos we use MoCoGAN, a GAN based on motion and content decomposition.
- MoCoGAN consists of 3 networks. The first is a recurrent neural network that captures temporal information across frames and generates a latent variable corresponding to each frame.
- Each of these latent variables is fed to the second network, an image generator, which uses convolutions to upsample the latent into an image. The generated images are then joined together to form a video.
- The third network, the video discriminator, can be conditioned to check for a specific video category; using this information, videos of different categories can be generated.
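The pipeline described above can be sketched with toy stand-ins for the learned networks. This is a minimal illustration of the structure only, not the repo's implementation: all sizes, weights, and function names here are assumptions, the recurrent cell is a single tanh step, and the "generator" is one linear map rather than convolutional upsampling.

```python
import numpy as np

# Hypothetical toy sizes; the real MoCoGAN hyperparameters differ.
CONTENT_DIM, MOTION_DIM, N_FRAMES, IMG_SIZE = 8, 4, 16, 32

rng = np.random.default_rng(0)

# Toy recurrent weights (the real model learns these during training).
W_H = rng.standard_normal((MOTION_DIM, MOTION_DIM)) * 0.1
W_X = rng.standard_normal((MOTION_DIM, MOTION_DIM)) * 0.1
# Stand-in for the convolutional image generator: a single linear map.
W_G = rng.standard_normal((IMG_SIZE * IMG_SIZE, CONTENT_DIM + MOTION_DIM)) * 0.01

def generate_latents(n_frames):
    """Sample one shared content code, then unroll the RNN for per-frame motion codes."""
    z_content = rng.standard_normal(CONTENT_DIM)
    h = np.zeros(MOTION_DIM)
    latents = []
    for _ in range(n_frames):
        eps = rng.standard_normal(MOTION_DIM)   # fresh noise drives the motion
        h = np.tanh(W_H @ h + W_X @ eps)        # one recurrent step
        latents.append(np.concatenate([z_content, h]))  # content part is shared
    return np.stack(latents)                    # (n_frames, CONTENT_DIM + MOTION_DIM)

def image_generator(z):
    """Map a per-frame latent to a frame; the real generator upsamples with convolutions."""
    return (W_G @ z).reshape(IMG_SIZE, IMG_SIZE)

latents = generate_latents(N_FRAMES)
video = np.stack([image_generator(z) for z in latents])  # join frames into a video
print(video.shape)  # (16, 32, 32)
```

Note how the content half of every frame's latent is identical while the motion half changes, which is the decomposition the bullet points describe.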
- You will find the code for this task in the videogeneration folder.
- Run the preprocess.py script, which downloads the dataset and then preprocesses it as necessary:
```
python3 preprocess.py
```
- Currently the code is set up to preprocess the KTH dataset.
- Go to src/run.sh and set the necessary hyperparameters in this shell file, then run it:
```
./run.sh
```
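As a rough picture of what such a launch script looks like, here is a hypothetical sketch of src/run.sh. The variable names and train.py flags below are illustrative assumptions, not the repo's actual interface; check the real file for the flags it uses.

```shell
#!/bin/bash
# Hypothetical contents of src/run.sh -- variable and flag names here are
# illustrative; the actual script in the repo may differ.
BATCH_SIZE=16        # videos per training batch
NUM_EPOCHS=100       # number of training epochs
LR=0.0002            # learning rate for generator and discriminators
VIDEO_LENGTH=16      # frames per generated video

python3 train.py \
    --batch_size "$BATCH_SIZE" \
    --epochs "$NUM_EPOCHS" \
    --lr "$LR" \
    --video_length "$VIDEO_LENGTH"
```

Keeping hyperparameters as shell variables at the top of the script makes it easy to record and reproduce a run.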
- The training output appears on the terminal, and if necessary you can view preliminary results on TensorBoard:
```
tensorboard --logdir logs
```
- To view the training results and test the network, open the demo-video.ipynb file and run it.
- Set the generator epoch in demo-video.ipynb to select which saved checkpoint to load.
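Selecting an epoch typically amounts to building the path of the corresponding saved weights. The helper below is a small sketch; the checkpoint naming scheme is an assumption, so match it against the files your training run actually wrote before using it in the notebook.

```python
from pathlib import Path

def generator_checkpoint(log_dir, epoch):
    """Build the path of the generator weights saved at a given epoch.

    The "generator_epoch_<N>.pth" naming scheme is a hypothetical example;
    adjust it to whatever files your training run produced.
    """
    return Path(log_dir) / f"generator_epoch_{epoch}.pth"

print(generator_checkpoint("logs", 120).as_posix())  # logs/generator_epoch_120.pth
```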