Describe the bug
TensorFlow Model Optimization fails to quantize dilated convolution layers.
System information
TensorFlow version (installed from source or binary): source
TensorFlow Model Optimization version (installed from source or binary): source
Python version: 3.10.12
Describe the expected behavior
Quantizing a dilated convolution should work the same as quantizing any other layer.
Describe the current behavior
Either tf or tfmot is silently failing. There is the following very old issue describing exactly this: tensorflow/tensorflow#26797
There is a slightly newer open issue showing that this was never resolved: tensorflow/tensorflow#53025
I am not 100% certain, but it seems like these issues are misplaced and should be designated as model-optimization issues.
There seems to be a workaround using tf.nn.conv2d instead of tf.keras.layers.Conv2D, but as far as I can tell this would require layer subclassing which, based on other issues, is still buggy when it comes to quantization. A rough sketch of what such a subclassed layer would look like is below.
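I have not verified this end to end, but the workaround would presumably look something like the following layer subclass. The class name DilatedConv2D and the hyperparameters are mine, purely for illustration:

```python
import tensorflow as tf


class DilatedConv2D(tf.keras.layers.Layer):
    """Illustrative wrapper that routes dilation through tf.nn.conv2d
    instead of tf.keras.layers.Conv2D."""

    def __init__(self, filters, kernel_size, dilation_rate=2, **kwargs):
        super().__init__(**kwargs)
        self.filters = filters
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate

    def build(self, input_shape):
        in_channels = int(input_shape[-1])
        # Plain trainable kernel; no bias, activation, etc. to keep the sketch short.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(self.kernel_size, self.kernel_size, in_channels, self.filters),
            initializer="glorot_uniform",
            trainable=True,
        )

    def call(self, inputs):
        # Dilation is passed directly to the low-level op rather than to the Keras layer.
        return tf.nn.conv2d(
            inputs,
            self.kernel,
            strides=1,
            padding="SAME",
            dilations=self.dilation_rate,
        )
```

This is exactly the kind of custom layer that, per the other open issues, tfmot does not quantize cleanly, which is why I don't consider it a real fix.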
Code to reproduce the issue
See aforementioned issues.
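For convenience, here is an untested minimal sketch of what a reproduction might look like (the input shape, filter count, and dilation rate are arbitrary). The quantize_model call is where, if the linked issues still apply, the dilated layer is mishandled:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Tiny model whose only interesting property is the dilated convolution.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same", dilation_rate=2),
    tf.keras.layers.ReLU(),
])

# Quantization-aware training wrapper from tfmot; the dilated Conv2D is
# expected to be handled like any other Conv2D here.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.summary()
```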