convert exported model to tensorflow lite? #136
Comments
I have some more information on this issue now. I'm trying to run the conversion on Windows 10 using tf-nightly, and I got the following output:
The Python API for the converter doesn't work with frozen graphs from TF 1.0, and I guess it might be the case with tf-nightly as well. You could try converting from the SavedModel instead, or expand the `export` method and run the conversion through the Python API.
@emedvedev Alas, I can't use the saved_model since my requirements are that the model must run on the new Edge TPU, and the TPU does not support dynamic sizes. (https://coral.withgoogle.com/docs/edgetpu/models-intro/)
Not entirely sure how a SavedModel would be different from the frozen graph of the same model for conversion purposes, so can’t advise there.
If you're sure that SavedModel is out, it seems like your best bet would be extending the code in the `export` method and using the Python API for conversion.
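A minimal sketch of what that could look like with the TF 1.x Python API, assuming the `export` code has a session and fixed-shape input/output tensors in scope (the toy graph below is only a stand-in for the actual aocr tensors):

```python
import tensorflow as tf  # TF 1.x

# Toy graph standing in for the real aocr tensors; in practice these would be
# the session and input/output tensors that `export` already builds.
graph = tf.Graph()
with graph.as_default():
    input_image = tf.placeholder(tf.float32, shape=[1, 32, 100, 1], name="input")
    prediction = tf.layers.conv2d(input_image, 4, 3, name="prediction")

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    converter = tf.lite.TFLiteConverter.from_session(
        sess, input_tensors=[input_image], output_tensors=[prediction])
    tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```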
…On Jun 17, 2019, 17:19 +0700, Luukjn ***@***.***>, wrote:
@emedvedev Alas I can't use the saved_model since my requirments are that the model must run on the new Edge TPU. And the TPU does not support dynamic sizes. (https://coral.withgoogle.com/docs/edgetpu/models-intro/)
I don't really know what would be required to create a model that meets the requirements for the TPU.
Sorry for the delay, but I think I've got it. I used MutableHashTable for the charmap lookup, and it's not supported by TF Lite. However, a simple HashTable can be used instead, so if you change L159-L169 in model.py to:
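A rough sketch of what that HashTable-based replacement might look like in TF 1.x, assuming the character map is available as parallel lists of characters and integer IDs (the variable names are illustrative, not the exact ones from model.py):

```python
import tensorflow as tf  # TF 1.x

# Illustrative charmap; model.py builds its own character list.
chars = [' ', 'a', 'b', 'c']
ids = list(range(len(chars)))

# Forward lookup (char -> id): a static HashTable instead of the
# MutableHashTable, which TF Lite doesn't support.
char_to_id = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(
        tf.constant(chars, dtype=tf.string),
        tf.constant(ids, dtype=tf.int64)),
    default_value=tf.constant(0, dtype=tf.int64))

# Reverse lookup (id -> char): the default value must be a string here,
# since this table's values are strings.
id_to_char = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(
        tf.constant(ids, dtype=tf.int64),
        tf.constant(chars, dtype=tf.string)),
    default_value=tf.constant(' ', dtype=tf.string))
```

Unlike a MutableHashTable, a static table like this is filled by its initializer (run via tf.tables_initializer()), so there is no separate insert op to keep around.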
And remove the corresponding line (L171) as well.
I'm also trying to convert the model to tf.lite and encounter the same problems. After changing L159-L169 and L171, I get the following error:
Any recommendations?
Could be something with the HashTable syntax. My recommendation was off the top of my head, so not guaranteed, but should still be pretty close :)
Is it just the export that doesn’t work with the changes? Does training work? If it doesn’t, then it’s definitely the HashTable syntax, and should be straightforward to fix through some debugging and going through the docs.
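One guess at the specific failure, judging from the TypeError in the traceback quoted below: the table being built on model.py L166 maps integer IDs back to characters, so its default_value has to be a string rather than -1. A small self-contained illustration (not the actual model.py code):

```python
import tensorflow as tf  # TF 1.x

ids = tf.constant([0, 1, 2], dtype=tf.int64)
chars = tf.constant([' ', 'a', 'b'], dtype=tf.string)

# This raises "TypeError: Expected string, got -1 of type 'int' instead",
# because -1 can't be converted to the table's string value dtype:
# bad = tf.contrib.lookup.HashTable(
#     tf.contrib.lookup.KeyValueTensorInitializer(ids, chars), -1)

# Using a string default works:
id_to_char = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(ids, chars), ' ')
```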
…On Jul 9, 2019, 15:09 +0300, B.Sc. Christopher Baumgärtner ***@***.***>, wrote:
I'm also trying to convert the model to tf.lite and encounter the same problems. After changing L159-L169 and L171, I get the following error:
Traceback (most recent call last):
  File "/home/chris/tf/tf/bin/aocr", line 10, in <module>
    sys.exit(main())
  File "/home/chris/tf/tf/lib/python3.6/site-packages/aocr/__main__.py", line 252, in main
    channels=parameters.channels,
  File "/home/chris/tf/tf/lib/python3.6/site-packages/aocr/model/model.py", line 166, in __init__
    -1
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/contrib/lookup/lookup_ops.py", line 332, in __init__
    super(HashTable, self).__init__(default_value, initializer)
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/ops/lookup_ops.py", line 167, in __init__
    default_value, dtype=self._value_dtype)
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1087, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1145, in convert_to_tensor_v2
    as_ref=False)
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1224, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 305, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 246, in constant
    allow_broadcast=True)
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 284, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 466, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 371, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected string, got -1 of type 'int' instead.
Any recommendations?
Hi there, I am trying to do the same on a Mac. I have been able to export MobileNet V1 to tflite, but I am not able to do the same with this model, even when using the saved_model. Do you have an update on the HashTable issue? Actually, after making the changes above, training doesn't work anymore. My code:
Then I get this error:
Any update?
Same problem here. Any update on this?
Hello,
I'm trying to run the exported model on a mobile device. After some research I found TensorFlow Lite.
Is there a way to convert the exported model to TensorFlow Lite? If there is, how would I go about doing just that?
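For background, the usual starting point in TensorFlow 1.x is pointing the TFLiteConverter Python API at the exported SavedModel (the path below is illustrative); the discussion above covers why a straightforward conversion fails for this particular model, namely the MutableHashTable charmap and the dynamic input sizes.

```python
import tensorflow as tf  # TF 1.x

# Illustrative path; use the directory produced by `aocr export`.
converter = tf.lite.TFLiteConverter.from_saved_model("./exported-model")
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```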