Time taken for multi_round_infer.sh to run? #15
How big is your test dataset?
Hi, I am facing the same problem. Here are my (unsuccessful) attempts:
INFO:tensorflow:Restoring parameters from PIE_ckpt/pie_model.ckpt
INFO:tensorflow:Error recorded from prediction_loop: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Op type not registered 'MapAndBatchDatasetV2' in binary running on n-dfbb99ed-w-0. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
INFO:tensorflow:prediction_loop marked as finished
WARNING:tensorflow:Reraising captured error
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1317, in _run_fn
self._extend_graph()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1352, in _extend_graph
tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'MapAndBatchDatasetV2' in binary running on n-dfbb99ed-w-0. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

Would you mind dropping some hints on how to achieve the decoding speed reported in the paper?
@melisa-qordoba please give more details on how you exported and then imported the estimator. What was the batch size when you used the exported estimator, still 1? If yes, try a larger batch size and see if there is any improvement. As the paper notes, this model was built for accuracy rather than speed.
I am running the scripts on Gitpod and it's taking a long time. May I know how much time it usually takes?