
Fix weight precision format string #877

Merged: 2 commits into fastmachinelearning:main on Oct 6, 2023

Conversation

vloncar (Contributor) commented on Sep 28, 2023

Description

As observed in #874 (though not limited to zero-integer quantizers), the generated HLS doesn't achieve bit-perfect matching with the QKeras model. This turns out to be caused by the initialization of the weights from literals. We print literals with {:#f} (where # is the number of fractional digits required), but this doesn't behave as intended: in Python's format specification, a bare number before f is a minimum field width, leaving the precision at its default of 6. We should instead use {:.#f} (with the leading .), which actually sets the precision. For example:

Python 3.8.5 (default, Sep  4 2020, 07:30:14) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> x = 0.3203125
>>> '{:7f}'.format(x)
'0.320312'
>>> '{:.7f}'.format(x)
'0.3203125'
>>>

ap_fixed<8,1> would interpret the literal 0.320312 as the value 0.3125 (quantization to 7 fractional bits truncates it to 40/128 rather than the intended 41/128), corrupting the result. This one-line (in fact, one-character) change fixes #874 (and similar issues).
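To make the failure concrete, here is a minimal runnable sketch; the helper weight_literal is hypothetical and only mirrors the idea of the fix, not the actual hls4ml writer code:

# 0.3203125 is exactly representable with 7 fractional bits:
# 0.3203125 * 2**7 == 41.0 exactly, whereas the truncated literal is not:
# 0.320312 * 2**7 == 40.999936, which truncation rounds down to 40,
# and 40 / 2**7 == 0.3125.
x = 0.3203125

print('{:7f}'.format(x))   # '0.320312'  -- 7 is a field width; precision stays 6
print('{:.7f}'.format(x))  # '0.3203125' -- .7 sets the precision to 7 digits

# Hypothetical helper mirroring the one-character fix:
def weight_literal(value, fractional_bits):
    # Emit enough fractional digits for the HLS tool to recover the exact value.
    return '{:.{}f}'.format(value, fractional_bits)

assert weight_literal(x, 7) == '0.3203125'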

Type of change

  • Bug fix (non-breaking change that fixes an issue)

Tests

Existing QKeras tests should suffice. We should eventually aim for stricter agreement, ideally bit-perfect, but that's a separate change.
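As an illustration of what a stricter check could look like (a sketch only; the dummy outputs, tolerances, and variable names are assumptions, not taken from the existing tests):

import numpy as np

# Dummy outputs standing in for qkeras_model.predict(x) and hls_model.predict(x):
y_qkeras = np.array([0.3203125, -0.125])
y_hls = np.array([0.3203125, -0.125])

# Tolerance-based agreement, roughly the current style of check:
np.testing.assert_allclose(y_hls, y_qkeras, rtol=1e-2, atol=1e-2)

# Bit-perfect agreement, the eventual goal:
np.testing.assert_array_equal(y_hls, y_qkeras)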

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

@vloncar added the please test (Trigger testing by creating local PR branch) label on Sep 28, 2023
lgray commented on Sep 28, 2023

Just tried it for my model with quantized_bits(N, 0, alpha=1) everywhere and it looks good. Thanks!
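(For reference, a quantizer configured this way looks roughly like the following sketch; the layer size is made up:)

from qkeras import QDense, quantized_bits

# alpha=1 pins the quantizer scale to 1 (no automatic scaling), with
# 8 total bits and 0 integer bits, matching the configuration above.
layer = QDense(
    16,
    kernel_quantizer=quantized_bits(8, 0, alpha=1),
    bias_quantizer=quantized_bits(8, 0, alpha=1),
)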

lgray commented on Sep 28, 2023

Though for some reason I need wider widths than expected to achieve closure, as described in the Slack chat.

@jmduarte added and removed the please test (Trigger testing by creating local PR branch) label on Oct 6, 2023
@jmitrevs merged commit 147047e into fastmachinelearning:main on Oct 6, 2023
4 checks passed
Successfully merging this pull request may close these issues.

QDense (and other layers) with quantized_bits(8, 0, alpha=1) weights never achieve agreement?