Agglomerator 🧠🗜

Official code for "Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks" by N. Garau, N. Bisagno, Z. Sambugaro, and N. Conci, accepted at CVPR'22 [pdf].

Teaser

Setup

From the main directory, run:

```shell
pipenv install
```

to install all the required dependencies.

Training/validation/testing

The code comes with a separate configuration file for each dataset, with multiple flags to run training, validation, and testing.

As an example, to run contrastive pre-training on CIFAR-10 on a single GPU, execute:

```shell
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg
```

After pre-training, you can run the supervised training phase with:

```shell
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg --resume_training --supervise --load_checkpoint_dir <path_to_checkpoint.ckpt>
```

To run testing or to freeze the network weights, set the `mode` flag (e.g. `--mode test` or `--mode freeze`). Refer to this page for additional info about each flag.
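For instance, a test run might combine the flags shown above as follows. This is a sketch, not a verified invocation: it assumes `--mode` can be combined with `--load_checkpoint_dir` in the same way as the training commands, and the checkpoint path is a placeholder.

```shell
# Hypothetical example: evaluate a trained checkpoint on CIFAR-10.
# <path_to_checkpoint.ckpt> is a placeholder for your own checkpoint file.
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg --mode test --load_checkpoint_dir <path_to_checkpoint.ckpt>
```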

Training

Pre-trained models

We provide pre-trained models that can be used to plot islands of agreement or to fine-tune for image classification. To fine-tune a pre-trained model, run:

```shell
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg --patch_size 1 --patch_dim 128 --resume_training --supervise --load_checkpoint_dir path_to_pretrained_model.ckpt
```

Plotting islands of agreement

To enable live visualization of the islands of agreement during training/validation/testing, set the flag `--plot_islands`.
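As an illustration, `--plot_islands` can be appended to any of the run commands above. The combination below is a sketch assuming the flag composes with the others; the checkpoint path is a placeholder.

```shell
# Hypothetical example: run a test pass with live island visualization enabled.
# <path_to_checkpoint.ckpt> is a placeholder for your own checkpoint file.
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg --mode test --load_checkpoint_dir <path_to_checkpoint.ckpt> --plot_islands
```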

Islands

Citing

```bibtex
@article{garau2022interpretable,
    title={Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks},
    author={Garau, Nicola and Bisagno, Niccol{\`o} and Sambugaro, Zeno and Conci, Nicola},
    journal={arXiv preprint arXiv:2203.03282},
    year={2022}
}
```

Credits