PyTorch implementation of LBT

This is an official implementation of the paper "Toward INT4 Fixed-Point Training via Exploring Quantization Error for Gradients", accepted to ECCV 2024.

For more information, check out the project site [website].

Getting started

Dependencies

  • Python >= 3.6
  • PyTorch >= 1.8.0

Training & Evaluation

You can adjust the bit-widths of forward and backward passes in models/modules.py.
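For intuition, bit-width control typically means quantizing tensors to a symmetric fixed-point grid whose resolution depends on the number of bits. The snippet below is a minimal, illustrative sketch of such a uniform quantizer, assuming a simple max-abs scale; it is not the repository's exact scheme from models/modules.py.

```python
import torch

def quantize_uniform(x: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    # Symmetric uniform fixed-point quantization (illustrative sketch only).
    # qmax is the largest representable integer level, e.g. 7 for INT4.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    # Round to the integer grid, clamp to the representable range,
    # then map back to floating point (dequantize).
    q = torch.clamp(torch.round(x / scale), -qmax, qmax)
    return q * scale

x = torch.randn(4, 4)
x_fwd = quantize_uniform(x, num_bits=4)  # e.g. a forward-pass bit-width
x_bwd = quantize_uniform(x, num_bits=8)  # e.g. a wider backward-pass bit-width
```

Lower bit-widths give a coarser grid (at most 2 * qmax + 1 distinct values), which is what makes INT4 gradients challenging and motivates analyzing their quantization error.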

To start training, run:

 python train.py --config configs/resnet20_cifar100.yml

Citation

@inproceedings{kim2024toward,
    author={Kim, Dohyung and Lee, Junghyup and Jeon, Jeimin and Moon, Jaehyeon and Ham, Bumsub},
    title={Toward INT4 Fixed-Point Training via Exploring Quantization Error for Gradients},
    booktitle={European Conference on Computer Vision},
    year={2024},
}

Credit