Nathan Michlo*, Richard Klein*, Steven James*
Affiliations: * RAIL Lab & Wits University
Learning disentangled representations with variational autoencoders (VAEs) is often attributed to the regularisation component of the loss. In this work, we highlight the interaction between data and the reconstruction term of the loss as the main contributor to disentanglement in VAEs. We show that standard benchmark datasets have unintended correlations between their subjective ground-truth factors and perceived axes in the data according to typical VAE reconstruction losses. Our work exploits this relationship to provide a theory for what constitutes an adversarial dataset under a given reconstruction loss. We verify this by constructing an example dataset that prevents disentanglement in state-of-the-art frameworks while maintaining human-intuitive ground-truth factors. Finally, we re-enable disentanglement by designing an example reconstruction loss that is once again able to perceive the ground-truth factors. Our findings demonstrate the subjective nature of disentanglement and the importance of considering the interaction between the ground-truth factors, data and notably, the reconstruction loss, which is under-recognised in the literature.
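The idea that the reconstruction loss determines which factor distances a VAE can "perceive" can be illustrated with a minimal sketch (not code from this repo; the toy dataset and sizes are assumptions for illustration). A pixel-wise MSE between images of a square at different horizontal positions grows only while the squares overlap, then plateaus, so the loss can no longer order ground-truth factor values beyond that range:

```python
import numpy as np

def make_square(x: int, size: int = 8, img: int = 32) -> np.ndarray:
    """Hypothetical toy image: a white square at horizontal position x."""
    im = np.zeros((img, img))
    im[12:12 + size, x:x + size] = 1.0
    return im

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Pixel-wise mean squared error, as used by a typical VAE reconstruction loss."""
    return float(np.mean((a - b) ** 2))

# Distances "perceived" by MSE as the ground-truth factor (position) changes:
base = make_square(0)
dists = [mse(base, make_square(dx)) for dx in range(0, 17, 4)]

# Once the squares no longer overlap (dx >= size), the MSE plateaus:
# the loss sees all further factor changes as equally distant.
print(dists)
```

Datasets whose factor changes produce such non-monotonic or flat perceived distances are, in the sense of the paper, adversarial for that reconstruction loss.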
This repo contains additional conference resources:
- Paper (extended version on arXiv)
- Poster
- Presentation
Code and experiments are extensions of my MSc research.
- VAE frameworks and experiments are run using my disent framework.
Computations were performed using the High Performance Computing infrastructure provided by the Mathematical Sciences Support unit at the University of the Witwatersrand.