
Failed quantization of dilated convolution layers: tensorflow or tensorflow-model-optimization bug? #1130

Open
Ebanflo42 opened this issue May 13, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@Ebanflo42

Describe the bug
TensorFlow Model Optimization fails to quantize dilated convolution layers.

System information

TensorFlow version (installed from source or binary): source

TensorFlow Model Optimization version (installed from source or binary): source

Python version: 3.10.12

Describe the expected behavior

Quantizing dilated convolutions should be essentially the same as any other layer.

Describe the current behavior

Either tf or tfmot is failing silently. The following very old issue describes exactly this behavior:

tensorflow/tensorflow#26797

A slightly newer, still-open issue shows that this was never resolved:

tensorflow/tensorflow#53025

I am not 100% certain, but these issues appear to be misplaced and should be designated as model-optimization issues.

There seems to be a workaround: use tf.nn.conv2d instead of tf.keras.layers.Conv2D. As far as I can tell, though, this requires layer subclassing, which, based on other issues, is still buggy when it comes to quantization.
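For reference, the workaround would look roughly like the sketch below: a hypothetical subclassed layer (the name `DilatedConv2D` and its constructor arguments are my own, not from any of the linked issues) that calls tf.nn.conv2d directly and passes the dilation rate through the `dilations` argument.

```python
import tensorflow as tf

class DilatedConv2D(tf.keras.layers.Layer):
    """Hypothetical subclassed layer calling tf.nn.conv2d directly,
    as a sketch of the workaround described above."""

    def __init__(self, filters, kernel_size, dilation_rate=2, **kwargs):
        super().__init__(**kwargs)
        self.filters = filters
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate

    def build(self, input_shape):
        # Kernel laid out as (height, width, in_channels, out_channels),
        # which is what tf.nn.conv2d expects.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(self.kernel_size, self.kernel_size,
                   int(input_shape[-1]), self.filters),
        )

    def call(self, inputs):
        # Dilation is applied via tf.nn.conv2d's `dilations` argument
        # instead of Conv2D's `dilation_rate`.
        return tf.nn.conv2d(
            inputs, self.kernel, strides=1, padding="SAME",
            dilations=self.dilation_rate)
```

Whether tfmot can then be taught to quantize such a custom layer is exactly the part that, per the other open issues, is still buggy.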

Code to reproduce the issue
See the aforementioned issues.

@Ebanflo42 Ebanflo42 added the bug Something isn't working label May 13, 2024