This repository contains the PyTorch implementation of the paper "Generalizability of Adversarial Robustness Under Distribution Shifts", published in Transactions on Machine Learning Research (TMLR) with a Featured Certification award. The paper investigates the interplay between adversarial robustness and domain generalization, showing that both empirical and certified robustness generalize to unseen domains, including a real-world medical application.
The code requires the following packages:
- AutoAttack (pip install git+https://github.com/fra31/auto-attack)
To train the models and evaluate the generalization of empirical robustness, run the following command:
python -m domainbed.scripts.train_empirical --data_dir ./datasets/ --dataset PACS --algorithm ERM --test_env 0 --steps 300 --output_dir ./logs/
This loads the data from ./datasets/PACS/ and runs standard ERM training for 300 iterations/steps, with environment 0 held out as the test environment. The results are saved in ./logs/, where you will find the best model checkpoint along with the clean and robust accuracies (PGD and AutoAttack).
The --algorithm flag accepts any of: ERM, PGDLinf, TradesLinf, PGDL2, TradesL2.
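For example, to adversarially train with Linf PGD instead of ERM, keeping all other flags the same:

python -m domainbed.scripts.train_empirical --data_dir ./datasets/ --dataset PACS --algorithm PGDLinf --test_env 0 --steps 300 --output_dir ./logs/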
The default parameters for adversarial training are:
- eps = 2 / 255
- step = eps / 4
- num_steps = 10
- beta = 3.0

These defaults can be modified in ./domainbed/algorithms.py.
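For orientation, below is a minimal sketch of an Linf PGD attack using these defaults. It is an illustration only, not the repository's implementation; the actual attack in ./domainbed/algorithms.py may differ in details such as the random start, loss, or clipping. (beta is the TRADES trade-off weight and is only used by the Trades algorithms, so it does not appear here.)

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=2/255, num_steps=10):
    """Minimal Linf PGD sketch using the defaults above (step = eps / 4).

    Illustration only; the repository's attack in ./domainbed/algorithms.py
    may differ (random start, loss, clipping details).
    """
    step = eps / 4
    # Random start inside the Linf eps-ball, kept within the valid pixel range.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    delta = torch.clamp(x + delta, 0, 1) - x
    for _ in range(num_steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta + step * grad.sign()        # ascend the loss
            delta = torch.clamp(delta, -eps, eps)     # project onto the eps-ball
            delta = torch.clamp(x + delta, 0, 1) - x  # keep x + delta in [0, 1]
    return (x + delta).detach()
```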
To train the smoothed models and evaluate the generalization of certified robustness, we follow the implementation in the DeformRS repository.
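For context, the smoothed classifier predicts the class that is most likely under random perturbations of the input. Below is a minimal Monte Carlo prediction sketch in the style of standard Gaussian randomized smoothing (Cohen et al., 2019), which DeformRS builds on; sigma and n_samples are illustrative placeholders, not this repository's settings.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Monte Carlo prediction of a Gaussian-smoothed classifier.

    Illustration in the style of standard randomized smoothing
    (Cohen et al., 2019); sigma and n_samples are placeholders,
    not this repository's settings. x has shape (C, H, W).
    """
    # Replicate the input, perturb each copy with Gaussian noise,
    # and return the most frequently predicted class.
    batch = x.unsqueeze(0).repeat(n_samples, 1, 1, 1)
    preds = model(batch + sigma * torch.randn_like(batch)).argmax(dim=1)
    return preds.bincount().argmax().item()
```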
If you use this code or the results in your research, please cite the following paper:
@article{alhamoud2023generalizability,
  title={Generalizability of Adversarial Robustness Under Distribution Shifts},
  author={Alhamoud, Kumail and Hammoud, Hasan Abed Al Kader and Alfarra, Motasem and Ghanem, Bernard},
  journal={Transactions on Machine Learning Research},
  year={2023}
}
- Kumail Alhamoud: [email protected]
- Hasan Abed Al Kader Hammoud: [email protected]