
how can I generate after training? #7

Open
daiwk opened this issue Aug 22, 2016 · 3 comments


daiwk commented Aug 22, 2016

How can I use the checkpoint to generate images?
e.g. I want to do something like:

    saver.restore(sess, ckt_path)
    print("restored")

    x, _ = self.dataset.test.next_batch(self.batch_size)
    feed_dict = {self.input_tensor: x}
    sess.run(init)

    print("testing.....")
    z_var = self.model.latent_dist.sample_prior(self.batch_size)
    fake_x, _ = self.model.generate(z_var)
    fake_d, _, fake_reg_z_dist_info, _ = self.model.discriminate(fake_x)
    generator_loss = -tf.reduce_mean(tf.log(fake_d + TINY))

    sess.run(init)

    # print(sess.run(z_var, feed_dict))  # ok
    print(sess.run(fake_x, feed_dict))

but it says:

    tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value fc_batch_norm_8/batch_norm/fc_batch_norm_8/batch_norm/moments/normalize/mean/ExponentialMovingAverage
    [[Node: fc_batch_norm_8/batch_norm/fc_batch_norm_8/batch_norm/moments/normalize/mean/ExponentialMovingAverage/read = Identity[T=DT_FLOAT, _class=["loc:@fc_batch_norm_8/batch_norm/fc_batch_norm_8/batch_norm/moments/normalize/mean/ExponentialMovingAverage"], _device="/job:localhost/replica:0/task:0/cpu:0"]]
    Caused by op u'fc_batch_norm_8/batch_norm/fc_batch_norm_8/batch_norm/moments/normalize/mean/ExponentialMovingAverage/read', defined at:


NHDaly commented Oct 6, 2016

I'd love to hear the official answer to this question!

That said, I managed to get this to work by making this tiny change to infogan_trainer.py:
In 8fca21e, I changed:

    fake_x, _ = self.model.generate(z_var)

to

    self.fake_x, _ = self.model.generate(z_var)

so that the InfoGANTrainer holds a reference to the piece of the graph that generates fake data samples from the trained generator graph. This way, you can reuse the graph to generate data once it's trained.


To get access to the trained graph, I think you could probably do something similar to what you've done above by loading from a checkpoint, but instead in b4602c9 I also changed InfoGANTrainer.train() to accept a tf.Session() parameter, so that you can use it immediately without having to reload the weights from a checkpoint file.
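
For the checkpoint route, something like the following rough sketch should work (untested; `ckt_path` stands for whatever path the checkpoint was saved to, and it assumes the trainer graph has already been built, e.g. via `algo.init_opt()`, with the `self.fake_x` change above):

    # Sketch only: build the graph first, then restore the trained weights into it.
    saver = tf.train.Saver()
    sess = tf.Session()
    saver.restore(sess, ckt_path)  # restores trained variables; don't run the init op after this
    generated_images = sess.run(algo.fake_x)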


So finally, for mnist, in bb8ad8e I've changed the last few lines to now look like this:

    sess = tf.Session()
    algo.train(sess)

    # Use the trained model to generate one batch of images!
    generated_images = sess.run(algo.fake_x)

    print("Saving generated images to " + gen_dir)

    for i,img in enumerate(generated_images):
        img_file = os.path.join(gen_dir, 'generated_%05d.png' % (i))
        scipy.misc.toimage(img.reshape(dataset.image_shape[:2]), cmin=0.0, cmax=1.0).save(img_file)
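
Passing the live session in like this also sidesteps the FailedPreconditionError from the original post: all of the variables (including the batch-norm ExponentialMovingAverage ones) were created and initialized in that same session during training, so nothing is left uninitialized when you run the generator op.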

I hope this helps! 😸


zdx3578 commented Oct 9, 2016

You can refer to his code: https://github.com/RutgersHan/InfoGAN/


wxdai commented Aug 2, 2017

@daiwk I have the same problem! Did you manage to solve it?
