Supporting code to the paper
O Sidorov. Conditional GANs for Multi-Illuminant Color Constancy: Revolution or Yet Another Approach?
The work presents an extension of the supervised image-to-image translation algorithm "pix2pix" by Isola et al., oriented specifically toward the color constancy task.
AngularGAN inherits from this PyTorch implementation of pix2pix, so you may follow the original instructions for installation and dependencies. The new modules are implemented in PyTorch and do not require additional packages.
- Put your data in `datasets/facades` in the format

      facades/
        test/
          xxx.jpg
          yyy.jpg
          ...
        train/
          zzz.jpg
          ...

  where each image consists of a pair of images A and B (input and output) concatenated along the horizontal axis (a sketch for building such pairs is given after these steps).
- Run `visdom` to open the training visualization (optional)
- Run the training (change the parameter `--model angular_gan_v2` to use v2; the angular objective these models refer to is sketched after these steps):

      chmod a+x run.sh
      ./run.sh
- For testing, use `runtest.sh` instead (change the parameter `--model angular_gan_v2` to use v2)
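If your source and ground-truth images are stored as separate files, a small helper can produce the concatenated A|B pairs described above. This is only a minimal sketch: the folder names `A/` and `B/` and the function name are placeholders, not part of the repository (the pix2pix codebase may already ship a similar utility).

```python
from pathlib import Path
from PIL import Image

def combine_pairs(dir_a, dir_b, dir_ab):
    """Concatenate same-named images from dir_a (input) and dir_b (target)
    side by side and write the result to dir_ab."""
    Path(dir_ab).mkdir(parents=True, exist_ok=True)
    for path_a in sorted(Path(dir_a).glob("*.jpg")):
        a = Image.open(path_a)
        b = Image.open(Path(dir_b) / path_a.name)
        ab = Image.new("RGB", (a.width + b.width, a.height))
        ab.paste(a, (0, 0))          # A on the left (input)
        ab.paste(b, (a.width, 0))    # B on the right (output)
        ab.save(Path(dir_ab) / path_a.name)

combine_pairs("A/train", "B/train", "datasets/facades/train")
combine_pairs("A/test", "B/test", "datasets/facades/test")
```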
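For orientation only, the models named `angular_gan` refer to an angular (recovery-angle) objective. Below is a minimal PyTorch sketch of such a per-pixel angular error between a generated and a ground-truth image; the actual loss used by the repository's models may differ in weighting and details.

```python
import torch
import torch.nn.functional as F

def angular_error(pred, target, eps=1e-7):
    """Mean per-pixel angular error in degrees between two RGB images
    of shape (N, 3, H, W); each pixel is treated as an RGB vector."""
    cos = F.cosine_similarity(pred, target, dim=1)    # (N, H, W)
    cos = cos.clamp(-1.0 + eps, 1.0 - eps)            # avoid NaN in acos
    return torch.rad2deg(torch.acos(cos)).mean()
```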
We thank the authors of pix2pix for their excellent work!
The MATLAB code `generate_tinted_images.m` allows applying a multi-illuminant color cast to the input images. The tint maps are randomized and are not coherent between frames. You can use the provided file `real_illum_11346_Normalized.mat` or create your own by simple normalization of the original illumination vectors as `e_norm = e./norm(e)`.
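For reference, a rough Python counterpart of these two steps is sketched below. The MATLAB script remains the authoritative implementation; the fixed gradient-shaped tint map, the file name, and the variable names are illustrative assumptions, whereas the real tint maps are randomized.

```python
import numpy as np
from PIL import Image

def normalize_illuminants(e):
    """Row-wise normalization of illumination vectors: e_norm = e / ||e||."""
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def apply_tint_map(img, illum_a, illum_b):
    """Tint an HxWx3 image (float, range [0, 1]) with a left-to-right blend
    of two normalized RGB illuminants; generate_tinted_images.m randomizes
    the spatial layout of the map instead of using this fixed gradient."""
    h, w, _ = img.shape
    alpha = np.linspace(0.0, 1.0, w)[None, :, None]   # horizontal blend weights
    tint = (1.0 - alpha) * illum_a + alpha * illum_b  # per-pixel illuminant, (1, W, 3)
    tint = tint / tint.max()                          # keep the cast within range
    return np.clip(img * tint, 0.0, 1.0)

# Example with two made-up illuminants and a placeholder input image:
e_norm = normalize_illuminants(np.array([[0.9, 1.0, 0.7], [0.5, 0.7, 1.0]]))
img = np.asarray(Image.open("input.jpg"), dtype=np.float64) / 255.0
tinted = apply_tint_map(img, e_norm[0], e_norm[1])
```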
The GTAV Shadow Removal Dataset of 5,723 image pairs with and without shadows may be accessed via the link.