How to increase compression efficiency? #7
Comments
Length is indeed one of the reasons. The other reason is that the software implementation builds a dynamic Huffman tree which adapts to the input data. As suggested in the README, the hardware implementation is best for fast decompression. For optimal compression I suggest running software on e.g. a soft CPU like https://github.com/SpinalHDL/VexRiscv or any other implementation.
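To see roughly how much the dynamic tree buys, one can force zlib to emit only fixed Huffman codes (comparable to a fixed-tree hardware encoder) and compare against the default dynamic strategy. A minimal sketch in Python; the input path `testfile` is a placeholder:

```python
import zlib

with open("testfile", "rb") as f:  # placeholder: any sample input
    data = f.read()

def deflate(data, strategy):
    # wbits=-15 produces a raw deflate stream (no zlib header/trailer)
    c = zlib.compressobj(9, zlib.DEFLATED, -15, 9, strategy)
    return c.compress(data) + c.flush()

# Z_FIXED: fixed Huffman codes only, roughly what a fixed-tree
# hardware encoder produces
fixed = deflate(data, zlib.Z_FIXED)

# Default strategy: deflate builds a dynamic Huffman tree per block,
# adapted to the symbol statistics of the input
dynamic = deflate(data, zlib.Z_DEFAULT_STRATEGY)

print(f"fixed:   {len(fixed)} bytes")
print(f"dynamic: {len(dynamic)} bytes")
```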
Thanks for the prompt reply. Would you please explain in more detail how the software implementation uses a dynamic tree? If I increase the matching bytes from MATCH10 to 15, will that help compression efficiency? If the answer is yes, what other parameters should I modify, besides the MATCH10 block?
This hardware implementation does not implement a dynamic Huffman tree. Increasing the match length to 15 will increase the LUT count a lot and in most cases will not gain much compression. By the software implementation I meant the zlib library running on a CPU core.
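As an illustration of the match-effort trade-off in software, zlib's compression level controls how hard the LZ77 stage searches for longer matches, and the returns diminish at the high end. A quick sketch under the same placeholder-input assumption as above:

```python
import zlib

with open("testfile", "rb") as f:  # placeholder sample input
    data = f.read()

# Level 1 does a quick match search; level 9 searches exhaustively.
# The size difference between 6 and 9 is usually small, which mirrors
# why raising the hardware match length from 10 to 15 gains little.
for level in (1, 6, 9):
    compressed = zlib.compress(data, level)
    print(f"level {level}: {len(compressed)} bytes")
```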
There is a dynamic function in this project. What does this dynamic function do? Is this function only for decompression? Are the out_codes and stat_leaves values defined by the user? Would you please explain how these two values are obtained?
Hi,
I found that python zlib's compression efficiency is better than this HDL project's on your test file.
Is the main reason a difference in matching length between python zlib and hdl-deflate?
If I would like to increase the compression efficiency to the level of python zlib, what should I do?
Thanks.
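For reference, a hypothetical way to reproduce the python zlib baseline the question refers to, assuming the project's test input is available locally as `testfile`:

```python
import zlib

with open("testfile", "rb") as f:  # placeholder: the project's test input
    data = f.read()

compressed = zlib.compress(data, 9)
ratio = 100.0 * len(compressed) / len(data)
print(f"{len(data)} -> {len(compressed)} bytes ({ratio:.1f}% of original)")
```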