Fixed step used to log SAC summary #1008

Open · wants to merge 3 commits into master (changes shown from 2 commits)
docs/misc/changelog.rst (2 changes: 1 addition & 1 deletion)

@@ -65,7 +65,7 @@ Bug Fixes:
 - Fixed a bug in CloudPickleWrapper's (used by VecEnvs) ``__setstate__`` where loading was incorrectly using ``pickle.loads`` (@shwang).
 - Fixed a bug in ``SAC`` and ``TD3`` where the log timesteps was not correct (@YangRui2015)
 - Fixed a bug where the environment was reset twice when using ``evaluate_policy``
+- Fixed a bug where ``SAC`` used the wrong step to log to tensorboard after multiple calls to ``SAC.learn(..., reset_num_timesteps=True)``

 Deprecations:
 ^^^^^^^^^^^^^
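For context on the new entry: the training summaries were keyed by `_train_step`'s local `step` argument, while other logging in SAC is keyed by the model-level `num_timesteps` counter, so the two x-axes can disagree across repeated `learn()` calls. A minimal toy sketch of the two counters (hypothetical illustration, not stable-baselines code):

# Hypothetical stand-in for the two counters in SAC.learn():
# `step` is local to one learn() call, while `num_timesteps` is the
# model attribute the rest of the logging uses; the PR switches the
# summary writer to the latter.
class ToyModel:
    def __init__(self):
        self.num_timesteps = 0  # model-level counter

    def learn(self, total_timesteps):
        for step in range(total_timesteps):  # restarts at 0 every call
            self.num_timesteps += 1
            # before this PR: writer.add_summary(summary, step)
            # after this PR:  writer.add_summary(summary, self.num_timesteps)

model = ToyModel()
model.learn(100)  # step ran 0..99
model.learn(100)  # step runs 0..99 again; num_timesteps keeps its own count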
stable_baselines/sac/sac.py (3 changes: 2 additions & 1 deletion)

@@ -314,6 +314,7 @@ def setup_model(self):
         self.summary = tf.summary.merge_all()

     def _train_step(self, step, writer, learning_rate):
+        del step
Review thread on this line:

Collaborator: If `step` is not used anymore, why not remove it from the arguments?

Author: Good point! I will do that instead. I wasn't sure whether it followed a `_train_step` template shared with the other algorithms, but it appears that is not the case.

Author: Removed `step`, and fixed the issue of wrapping `replay_buffer` on every call to `learn` (now it only happens on the first one). Let me know if you think it would be better to instead check whether `self.model.replay_buffer` is already wrapped, using `isinstance(self.model.replay_buffer, HindsightExperienceReplayWrapper)`.
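A sketch of the alternative guard mentioned in the last comment (hypothetical helper; the import path is assumed, and the wrapper's constructor arguments are left to a caller-supplied factory):

from stable_baselines.her import HindsightExperienceReplayWrapper

def ensure_her_wrapped(model, make_wrapper):
    # Wrap the replay buffer only if it is not already wrapped, so that
    # repeated learn() calls cannot nest one wrapper inside another.
    if not isinstance(model.replay_buffer, HindsightExperienceReplayWrapper):
        model.replay_buffer = make_wrapper(model.replay_buffer)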

         # Sample a batch from the replay buffer
         batch = self.replay_buffer.sample(self.batch_size, env=self._vec_normalize_env)
         batch_obs, batch_actions, batch_rewards, batch_next_obs, batch_dones = batch
@@ -336,7 +337,7 @@ def _train_step(self, step, writer, learning_rate):
         if writer is not None:
             out = self.sess.run([self.summary] + self.step_ops, feed_dict)
             summary = out.pop(0)
-            writer.add_summary(summary, step)
+            writer.add_summary(summary, self.num_timesteps)
         else:
             out = self.sess.run(self.step_ops, feed_dict)

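Why the new step value matters: TensorBoard keys each scalar by the step passed to `add_summary`, so a counter that restarts at 0 folds later points back onto the start of the x-axis instead of extending it. A minimal TF1 sketch (hypothetical log directory and tag, not code from this PR):

import tensorflow as tf

writer = tf.summary.FileWriter("./tb_demo")  # hypothetical log dir
point = tf.Summary(value=[tf.Summary.Value(tag="loss", simple_value=0.5)])

writer.add_summary(point, global_step=0)
writer.add_summary(point, global_step=1)
# A restarting counter (what the old local `step` did on a second
# learn() call) writes to step 0 again rather than to step 2:
writer.add_summary(point, global_step=0)
writer.flush()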