
PyTorch Implementation of "SPI-GAN: Towards Single-Pixel Imaging through Generative Adversarial Network"


If you like our project, please give us a star ⭐ on GitHub to stay up to date with the latest updates.


What is a Single-Pixel Camera?

A single-pixel camera captures a scene with a single photodetector by sequentially measuring the scene's inner products with a set of structured sampling patterns; the image is then reconstructed computationally from these measurements.

😮 Highlights

We design a novel DL-based reconstruction framework to tackle the problem of high-quality and fast image recovery in single-pixel imaging.

💡 Fast, High-quality Image and Video Reconstruction

  • Deep learning-based reconstruction instead of the traditional l1-norm solution --> fast reconstruction
  • A Generative Adversarial Network (GAN) as the recovery architecture --> high-quality reconstruction
  • In addition to the adversarial and MSE losses, we use a perceptual loss computed in the feature space of a pre-trained ImageNet encoder --> helps achieve SOTA performance (see the sketch below)
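As a rough illustration, here is a minimal PyTorch sketch of such a perceptual loss. The VGG16 encoder and the layer cut-off are stand-in assumptions; the actual encoder, layers, and loss weights used in this repository may differ.

```python
# Minimal sketch of a perceptual loss in the feature space of an
# ImageNet-pretrained encoder. VGG16 and the layer cut-off below are
# stand-in assumptions; the encoder used in this repository may differ.
import torch.nn as nn
import torchvision

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        weights = torchvision.models.VGG16_Weights.IMAGENET1K_V1
        # Frozen feature extractor: first convolutional blocks of VGG16.
        self.features = torchvision.models.vgg16(weights=weights).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.mse = nn.MSELoss()

    def forward(self, x_hat, x):
        # Compare reconstruction and ground truth in feature space.
        return self.mse(self.features(x_hat), self.features(x))
```

The total generator objective then combines this term with the adversarial and pixel-wise MSE losses; the relative weights are hyperparameters set in the training code.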

🚩 Updates

Feel free to watch 👀 this repository for the latest updates.

[2023.12.18] : We have released our code!

[2021.07.21] : We have released our paper, SPI-GAN, on arXiv.

🛠️ Methodology

Proposed Framework

Our proposed SPI-GAN framework mainly consists of a generator that takes the noisy l2-norm solution (x̂_noisy) and produces a clean reconstruction (x̂) comparable to the ground-truth image x. A discriminator, in turn, learns to differentiate between x and x̂, trying not to be fooled by the generator.
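For intuition, the sketch below shows one adversarial training step under this setup; `generator`, `discriminator`, the optimizers, and the (unit) loss weights are placeholders rather than the repository's actual configuration, which lives in Main_Reconstruction.py.

```python
# Illustrative single training step of the generator/discriminator pair.
# Network definitions, optimizers, and loss weights are assumptions.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def train_step(generator, discriminator, opt_g, opt_d, x, x_noisy, perceptual):
    # Discriminator: tell the ground truth x apart from the reconstruction x_hat.
    x_hat = generator(x_noisy)
    d_real = discriminator(x)
    d_fake = discriminator(x_hat.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator while matching x in pixel and feature space.
    d_fake = discriminator(x_hat)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + mse(x_hat, x) + perceptual(x_hat, x)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```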

Architecture

Installation Guide

  • Install Anaconda and create an environment

     conda create -n spi_gan python=3.10
     conda activate spi_gan
  • After creating and activating the environment, install the dependencies:

     pip install -r requirements.txt

Code for Training

  • First, download the STL-10 and UCF-101 datasets; both are publicly available.

  • To create the images that will be fed to the GAN, run the MATLAB script "L2Norm_Solution.m" to generate the l2-norm solution. Create the necessary folders before running it. A Python version will be uploaded in the future (a rough sketch is given below).
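Until the Python version is available, a rough NumPy counterpart of that step might look like the following; the measurement matrix, noise level, and image size are illustrative assumptions and may not match L2Norm_Solution.m.

```python
# Hypothetical stand-in for L2Norm_Solution.m: compute the minimum-l2-norm
# (pseudo-inverse) estimate x_noisy from single-pixel measurements y = Phi @ x.
import numpy as np

def l2_norm_solution(y, Phi):
    """Minimum-l2-norm estimate x_noisy = pinv(Phi) @ y."""
    return np.linalg.pinv(Phi) @ y

# Illustrative example: a flattened 64x64 image at a 25% sampling rate.
rng = np.random.default_rng(0)
n, m = 64 * 64, (64 * 64) // 4
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix (assumed)
x = rng.random(n)                                 # placeholder ground-truth image
y = Phi @ x + 0.01 * rng.standard_normal(m)       # noisy single-pixel measurements
x_noisy = l2_norm_solution(y, Phi).reshape(64, 64)
```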

  • Execute the following to create the .npy files under different settings:

        python save_numpy.py
  • To train the model, run:

        python Main_Reconstruction.py

Data Preparation for Video Reconstruction: UCF-101

  • Download videos and train/test splits here.

  • Convert from avi to jpg files using util_scripts/generate_video_jpgs.py

     python -m util_scripts.generate_video_jpgs avi_video_dir_path jpg_video_dir_path ucf101
  • Generate annotation files in JSON format, similar to ActivityNet, using util_scripts/ucf101_json.py

    • annotation_dir_path includes classInd.txt, trainlist0{1, 2, 3}.txt, testlist0{1, 2, 3}.txt

      python -m util_scripts.ucf101_json annotation_dir_path jpg_video_dir_path dst_json_path

🚀 Reconstruction Results

Qualitative comparison

Generalization to Unseen Datasets

Quantitative comparison

Quantitative evaluation of SPI-GAN, reported as the average PSNR over 2000 test images.
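For reference, the average-PSNR metric can be computed with a standard definition such as the generic sketch below; this is not the repository's exact evaluation script.

```python
# Generic PSNR computation for images scaled to [0, max_val].
import numpy as np

def psnr(x, x_hat, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between ground truth x and reconstruction x_hat."""
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Average over a test set of (x, x_hat) pairs:
# avg_psnr = np.mean([psnr(x, x_hat) for x, x_hat in test_pairs])
```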

✏️ Citation

If you find our paper and code useful in your research, please consider giving a star ⭐ and a citation 📝.

@misc{karim2021spigan,
      title={SPI-GAN: Towards Single-Pixel Imaging through Generative Adversarial Network}, 
      author={Nazmul Karim and Nazanin Rahnavard},
      year={2021},
      eprint={2107.01330},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
