I am getting the following error while running the code in "per_duelingq_spaceinv_tf2.py" on Google Colab with TensorFlow 2.3.0:
ValueError                                Traceback (most recent call last)
<ipython-input-8-000aec5df542> in <module>()
     29
     30     if steps > DELAY_TRAINING:
---> 31         loss = train(primary_network, memory, target_network)
     32         update_network(primary_network, target_network)
     33     _, error = get_per_error(tf.reshape(old_state_stack, (1, POST_PROCESS_IMAGE_SIZE[0],

1 frames
<ipython-input-5-920194395f77> in get_per_error(states, actions, rewards, next_states, terminal, primary_network, target_network)
     10     # the q value for the prim_action_tp1 from the target network
     11     q_from_target = target_network(next_states)
---> 12     updates = rewards + (1 - terminal) * GAMMA * q_from_target.numpy()[:, prim_action_tp1]
     13     target_q[:, actions] = updates
     14     # calculate the loss / error to update priorites

ValueError: operands could not be broadcast together with shapes (31,) (32,32)
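As far as I can tell, the (32, 32) operand comes from the indexing on line 12: indexing a NumPy array with a full slice plus an index array gathers every listed column for every row, instead of one value per row. A minimal sketch of that behaviour (variable names are taken from the error line; the batch size and number of actions are just illustrative, not the repo's exact values):

import numpy as np

batch_size, num_actions = 32, 6
q_from_target = np.random.rand(batch_size, num_actions)            # shape (32, 6)
prim_action_tp1 = np.random.randint(num_actions, size=batch_size)  # shape (32,)

# Full slice + index array gathers every listed column for every row:
print(q_from_target[:, prim_action_tp1].shape)                     # (32, 32)

# A per-row gather (one Q-value per sample) would instead look like:
per_row = q_from_target[np.arange(batch_size), prim_action_tp1]
print(per_row.shape)                                               # (32,)

I am not sure whether the full-matrix gather is intended here, or whether the (31,) operand points at a separate batch-size mismatch between rewards/terminal and the sampled states.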
Any suggestions would be helpful.
Thanks & Regards, Swagat