
Rate-Distortion Optimized Post-Training Quantization for Learned Image Compression

Overview

This repository contains a simple implementation of post-training quantization (PTQ) for learned image compression (LIC).

Our goal is to provide materials and data that are easy to use for further study; some errors are inevitable.

1. Literature Comparison

We surveyed as many works on the quantization of LIC as we could. These works include:

  • [ICLR 2019] Ballé et al., 2019 : Integer Networks for Data Compression with Latent-Variable Models
  • [ICIP 2020] Sun et al., 2020 : End-to-End Learned Image Compression with Fixed Point Weight Quantization
  • [TCSVT 2020] Hong et al., 2020 : Efficient Neural Image Decoding via Fixed-Point Inference
  • [PCS 2021] Sun et al., 2021 : Learned Image Compression with Fixed-point Arithmetic
  • [Arxiv 2021] Sun et al., 2021* : End-to-End Learned Image Compression with Quantized Weights and Activations
  • [Arxiv 2022] He et al., 2022 : Post-Training Quantization for Cross-Platform Learned Image Compression
  • [PCS 2022] Koyuncu et al., 2022 : Device Interoperability for Learned Image Compression with Weights and Activations Quantization
  • [TCSVT 2022] Sun et al., 2022 : Q-LIC: Quantizing Learned Image Compression with Channel Splitting
  • [TCSVT 2023] Shi et al., 2023 : Rate-Distortion Optimized Post-Training Quantization for Learned Image Compression
  • updating

Results of quantizing LIC in terms of BD-rate.

| Methods | Bit-Width (W/A) | Granularity | Type | Models | BD-Rate (Kodak) |
|---|---|---|---|---|---|
| Ballé et al., 2019 | None | None | QAT | Ballé2018 | None |
| Sun et al., 2020 | 8/32 | channel-wise | QAT | Cheng2019 | None |
| Hong et al., 2020 | 8/10 | layer-wise | QAT | Ballé2018; Chen2021 | 26.50%; 16.04% |
| Hong et al., 2020 | 8/16 | layer-wise | QAT | Ballé2018; Chen2021 | 17.90%; 3.25% |
| Sun et al., 2021 | 8/32 | channel-wise | QAT | Cheng2019 | None |
| Sun et al., 2021* | 8/8 | channel-wise | QAT | Cheng2019 | None |
| He et al., 2022 | 8/8 | layer-wise (W), channel-wise (A) | PTQ | Ballé2018; Minnen2018; Cheng2020 | None; 0.66%; 0.42% |
| Koyuncu et al., 2022 | 16/16 | channel-wise | PTQ | TEAM14 | 0.29% |
| Sun et al., 2022 | 8/8 | channel-wise | PTQ | Cheng2019; Cheng2020 | 4.98% & 4.34% (MS-SSIM); 10.50% & 4.40% (MS-SSIM) |
| Shi et al., 2022 | 8/8 | channel-wise | PTQ | Minnen2018; Cheng2020; Lu2022 | 5.84%; 4.88%; 3.70% |
| Shi et al., 2022 | 10/10 | channel-wise | PTQ | Minnen2018; Cheng2020; Lu2022 | 0.41%; 0.43%; 0.49% |

Notation used in the table above:

  • PTQ : Post-Training Quantization
  • QAT : Quantization-Aware Training

Models referenced in the table above:

  • [ICLR 2018] Ballé2018 : Variational Image Compression with a Scale Hyperprior
  • [NeurIPS 2018] Minnen2018 : Joint Autoregressive and Hierarchical Priors for Learned Image Compression
  • [CVPR Workshop 2019] Cheng2019 : Deep Residual Learning for Image Compression
  • [CVPR 2020] Cheng2020 : Learned Image Compression With Discretized Gaussian Mixture Likelihoods and Attention Modules
  • [TIP 2021] Chen2021: End-to-End Learnt Image Compression via Non-Local Attention Optimization and Improved Context Modeling
  • [JPEG AI CfP 2022] TEAM14 : Presentation of the Huawei response to the JPEG AI Call for Proposals: Device agnostic learnable image coding using primary component extraction and conditional coding
  • [Arxiv 2022] Lu2022 : High-Efficiency Lossy Image Coding Through Adaptive Neighborhood Information Aggregation
  • [DCC 2022] Lu2022* : Transformer-based Image Compression

Datasets mentioned above:

  • Kodak : 24 images at 768×512 resolution
  • Tecnick : 100 images at 1200×1200 resolution
  • CLIC : the CLIC professional validation dataset, 41 images at approximately 2K resolution

Note:

  1. All data are taken from the corresponding published papers.
  2. Only the primary data reported in each paper are shown here; results reproduced by others are not included.
  3. None in the table means the authors ran the related experiments but did not report BD-rate.
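
Since every comparison above is reported as BD-rate, the generic sketch below of the standard Bjøntegaard computation (cubic fit of log-rate versus PSNR, integrated over the overlapping quality range) may help interpret the numbers. It is an illustration only, not code from this repository.

```python
# Generic BD-rate sketch (illustration only): average bitrate change (%) of a
# test codec against an anchor at equal PSNR; negative means fewer bits needed.
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    log_ra, log_rt = np.log(rate_anchor), np.log(rate_test)

    # Fit cubic polynomials: log-rate as a function of quality (PSNR).
    p_a = np.polyfit(psnr_anchor, log_ra, 3)
    p_t = np.polyfit(psnr_test, log_rt, 3)

    # Integrate both fits over the overlapping PSNR range.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)

    # Average difference in log-rate -> percentage rate change.
    avg_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0
```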

2. PTQ Optimization

PTQ has attracted a lot of attention, and more and more works try to push its limits. Here we introduce some notable PTQ works.

Task-oriented optimization

Recently, many works have recognized that minimizing quantization error alone may not be optimal; more attention should be paid to the task objective itself, e.g., accuracy, PSNR, or MS-SSIM. These works therefore push the limit of PTQ by minimizing the task loss, an idea we call task-oriented optimization (a simplified sketch follows the list below). Representative works include:

  • [PMLR 2020] AdaRound : Up or Down? Adaptive Rounding for Post-Training Quantization
  • [Arxiv 2020] AdaQuant : Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
  • [ML 2021] LAPQ : Loss Aware Post-Training Quantization
  • [ICLR 2021] BRECQ : BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction
  • [ICLR 2022] QDrop : QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization
  • [CVPR 2023] PD-Quant : PD-Quant: Post-Training Quantization Based on Prediction Difference Metric
  • [Arxiv 2023] AQuant : Efficient Adaptive Activation Rounding for Post-Training Quantization
  • [updating]
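
To make the idea concrete, here is a minimal, hypothetical sketch of task-oriented PTQ calibration for LIC (simplified; not the exact implementation of this repository): each tensor to be quantized gets a uniform fake-quantizer with a learnable step size, and the step sizes are tuned on a small calibration set by minimizing the rate-distortion task loss (λ·D + R) instead of the per-layer quantization error. A CompressAI-style forward output (`x_hat`, `likelihoods`) is assumed, and the wiring that attaches the quantizers inside the model is omitted.

```python
# Hypothetical, simplified sketch of task-oriented PTQ calibration.
import torch
import torch.nn as nn

class LearnedStepFakeQuant(nn.Module):
    """Uniform symmetric fake-quantizer with a learnable step size."""
    def __init__(self, init_tensor, n_bits=8):
        super().__init__()
        self.qmax = 2 ** (n_bits - 1) - 1
        # Min-max initialization of the step size.
        self.step = nn.Parameter(init_tensor.abs().max() / self.qmax)

    def forward(self, x):
        # Straight-through estimator on the rounding only; clamp and rescale stay
        # differentiable, so the step size receives gradients (LSQ-style).
        x_div = x / self.step
        x_q = x_div + (torch.round(x_div) - x_div).detach()
        x_q = torch.clamp(x_q, -self.qmax - 1, self.qmax)
        return x_q * self.step

def calibrate(model, quantizers, calib_loader, lmbda=0.01, steps=500, lr=1e-4):
    """Tune the quantizer step sizes against the rate-distortion task loss."""
    opt = torch.optim.Adam([q.step for q in quantizers], lr=lr)
    mse = nn.MSELoss()
    done = 0
    while done < steps:
        for x in calib_loader:
            out = model(x)  # assumed: {"x_hat": ..., "likelihoods": {...}}
            num_pixels = x.numel() / x.shape[1]  # N * H * W
            bpp = sum((-torch.log2(l)).sum() for l in out["likelihoods"].values()) / num_pixels
            loss = lmbda * mse(out["x_hat"], x) + bpp  # lambda * D + R
            opt.zero_grad()
            loss.backward()
            opt.step()
            done += 1
            if done >= steps:
                break
    return model
```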

3. Usage

  • Environment
        pip install -r requirements.txt
  • light uniform PTQ (see the sketch after this list)

  • task-oriented PTQ

  • ...
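
For reference, the sketch below shows what light uniform PTQ typically amounts to: channel-wise symmetric min-max quantization of the weights of a pretrained model. It is a generic illustration, not this repository's actual script or API.

```python
# Generic sketch of light uniform PTQ (illustration only).
import torch
import torch.nn as nn

@torch.no_grad()
def uniform_weight_ptq(model: nn.Module, n_bits: int = 8) -> nn.Module:
    qmax = 2 ** (n_bits - 1) - 1
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            w = m.weight.data
            # One scale per output channel (dim 0), from the channel's max magnitude.
            scale = w.reshape(w.shape[0], -1).abs().max(dim=1).values.clamp_min(1e-8) / qmax
            scale = scale.view((-1,) + (1,) * (w.dim() - 1))
            m.weight.data = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return model

# Example: quantized = uniform_weight_ptq(pretrained_lic_model, n_bits=8)
```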

Citation

If you use this project, please consider citing the relevant original publications for the models and datasets, and cite our paper as:

@ARTICLE{10274709,
	author={Shi, Junqi and Lu, Ming and Ma, Zhan},
	journal={IEEE Transactions on Circuits and Systems for Video Technology}, 
	title={Rate-Distortion Optimized Post-Training Quantization for Learned Image Compression}, 
	year={2023},
	volume={},
	number={},
	pages={1-1},
	doi={10.1109/TCSVT.2023.3323015}
	}

Acknowledgement

This framework is based on BRECQ, CompressAI, and TinyLIC.

We thank the authors for sharing their code.
