An unofficial implementation of both ViT-VQGAN and RQ-VAE in PyTorch


Table of Contents
  1. About The Project
  2. Getting Started
  3. Roadmap
  4. Contributing
  5. License
  6. Contact
  7. Acknowledgments

News

09/09

  1. Released weights of ViT-VQGAN small trained on ImageNet (here)

16/08

  1. First release of ViT-VQGAN base weights trained on ImageNet (here)
  2. Added a Colab notebook (here)

About The Project

This is an unofficial implementation of both ViT-VQGAN and RQ-VAE in PyTorch. ViT-VQGAN is a simple ViT-based vector-quantized autoencoder, while RQ-VAE introduces a new residual quantization scheme. Further details can be found in the respective papers.
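
The residual quantization idea behind RQ-VAE can be summarized with a short sketch. The code below is a minimal illustration only, not the implementation in this repo; the codebook size, depth, and tensor shapes are arbitrary assumptions, and gradient handling (straight-through estimator, commitment losses) is omitted.

import torch
import torch.nn as nn

class ResidualQuantizer(nn.Module):
    # Minimal sketch of residual quantization: repeatedly snap the current
    # residual to its nearest codebook entry, accumulate the chosen codes,
    # and quantize whatever is left over, for a fixed depth.
    # (A real VQ layer would add a straight-through estimator and commitment loss.)
    def __init__(self, num_codes=512, dim=32, depth=4):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.depth = depth

    def forward(self, z):                                  # z: (batch, tokens, dim)
        residual, quantized, indices = z, torch.zeros_like(z), []
        for _ in range(self.depth):
            flat = residual.reshape(-1, z.shape[-1])       # (batch*tokens, dim)
            dist = torch.cdist(flat, self.codebook.weight) # distances to all codes
            idx = dist.argmin(dim=-1).view(z.shape[:-1])   # (batch, tokens)
            code = self.codebook(idx)                      # (batch, tokens, dim)
            quantized = quantized + code
            residual = residual - code
            indices.append(idx)
        return quantized, torch.stack(indices, dim=-1)     # codes: (batch, tokens, depth)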

Getting Started

For ease of installation, we recommend using Anaconda to set up this repo.

Installation

A suitable conda environment named enhancing can be created and activated with:

conda env create -f environment.yaml
conda activate enhancing

Training

Training can be launched with a single command:

python3 main.py -c config_name -lr learning_rate -e epoch_nums
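
For example, assuming a config named imagenet_vitvq_small exists in the repo's config directory (the config name and the hyperparameter values below are purely illustrative), a run might look like:

python3 main.py -c imagenet_vitvq_small -lr 4.5e-6 -e 100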

Roadmap

  • Add ViT-VQGAN
    • Add ViT-based encoder and decoder
    • Add factorized codes
    • Add l2-normalized codes (both illustrated in the sketch after this list)
    • Replace PatchGAN discriminator with StyleGAN one
  • Add RQ-VAE
    • Add Residual Quantizer
    • Add RQ-Transformer
  • Add dataloaders for some common datasets
    • ImageNet
    • LSUN
    • COCO
      • Add COCO Segmentation
      • Add COCO Caption
    • CC3M
  • Add pretrained models
    • ViT-VQGAN small
    • ViT-VQGAN base
    • ViT-VQGAN large
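
As referenced in the roadmap above, ViT-VQGAN's factorized and l2-normalized codes change how the codebook lookup works. The sketch below is an illustration under assumed dimensions and names, not this repo's implementation: encoder features are projected into a small lookup space (factorized codes), and both latents and codebook entries are l2-normalized so the nearest code is chosen by cosine similarity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedL2Quantizer(nn.Module):
    # Sketch of a ViT-VQGAN-style codebook lookup: factorize the code by
    # projecting encoder features into a small lookup dimension, then
    # l2-normalize both latents and codebook so matching uses cosine similarity.
    def __init__(self, num_codes=8192, enc_dim=768, code_dim=32):
        super().__init__()
        self.proj_in = nn.Linear(enc_dim, code_dim)     # factorized (low-dim) codes
        self.proj_out = nn.Linear(code_dim, enc_dim)
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, h):                               # h: (batch, tokens, enc_dim)
        z = F.normalize(self.proj_in(h), dim=-1)        # l2-normalized latents
        codes = F.normalize(self.codebook.weight, dim=-1)
        sim = z @ codes.t()                             # cosine similarity (batch, tokens, num_codes)
        idx = sim.argmax(dim=-1)                        # nearest code per token
        z_q = F.normalize(self.codebook(idx), dim=-1)
        # straight-through estimator and codebook losses are omitted in this sketch
        return self.proj_out(z_q), idx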

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Thuan H. Nguyen - @leejohnthuan - [email protected]

Acknowledgments

This project would not be possible without the generous sponsorship from Stability AI and helpful discussions with folks in the LAION Discord.

This repo is also heavily inspired by a number of related repos and papers.
