Releases: Trusted-AI/adversarial-robustness-toolbox
ART 0.9.0
This release contains breaking changes to attacks and defences regarding the setting of attributes, removes restrictions on input shapes to enable the use of feature vectors, and includes several bug fixes.
Added
- Implemented pickling for the `tensorflow` and `pytorch` classifiers (#39, see the sketch after this list)
- Added example `data_augmentation.py` demonstrating the use of data generators
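A minimal sketch of the new pickling support. The `PyTorchClassifier` keyword names below follow later ART releases and may differ slightly in the 0.x constructors; the toy model and file name are illustrative only:

```python
import pickle

import torch
import torch.nn as nn
from art.classifiers import PyTorchClassifier  # 0.x module path

# Toy model; constructor keywords are assumptions for the 0.x series.
model = nn.Sequential(nn.Linear(4, 2))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters()),
    input_shape=(4,),
    nb_classes=2,
)

# Serialize and restore the classifier with the standard pickle protocol.
with open("classifier.pkl", "wb") as f:
    pickle.dump(classifier, f)
with open("classifier.pkl", "rb") as f:
    restored = pickle.load(f)
```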
Changed
- renamed and moved tests (#58)
- Changed input shape restrictions: classifiers now accept any input shape, for example feature vectors; attacks that require spatial inputs raise exceptions (#49)
- Clipping of data ranges is now optional in classifiers, which allows attacks to accept unbounded data ranges (#49)
- [Breaking change] Class attributes of attacks can no longer be changed via the `generate` method; attributes can only be set through the `__init__` and `set_params` methods (see the sketch after this list)
- [Breaking change] Class attributes of defences can no longer be changed via the `__call__` method; attributes can only be set through the `__init__` and `set_params` methods
- Resolved an inconsistency in the PGD `random_init` behaviour relative to Madry's version
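A sketch of the new attribute-setting contract, assuming a fitted ART `classifier` and test inputs `x_test` (the `eps` keyword follows the 0.x `FastGradientMethod` signature):

```python
from art.attacks import FastGradientMethod  # 0.x module path

# Attack parameters are fixed at construction time...
attack = FastGradientMethod(classifier, eps=0.1)

# ...and may only be changed afterwards through set_params.
attack.set_params(eps=0.3)

# generate() now only produces adversarial samples; it no longer
# accepts attribute overrides as keyword arguments.
x_adv = attack.generate(x_test)
```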
Removed
- Deprecated static adversarial trainer `StaticAdversarialTrainer`
Fixed
- Fixed a bug in the ZOO attack (#60)
ART 0.8.0
This release includes new evasion attacks, such as ZOO, the boundary attack and the adversarial patch, as well as the capability to break non-differentiable defences.
Added
- ZOO black-box attack (class `ZooAttack`)
- Decision boundary black-box attack (class `BoundaryAttack`)
- Adversarial patch (class `AdversarialPatch`)
- Function to estimate gradients in the `Preprocessor` API, along with its implementation for all concrete instances. This allows breaking non-differentiable defences (see the sketch after this list).
- Attributes `apply_fit` and `apply_predict` in the `Preprocessor` API that indicate whether a defence should be applied at training and/or test time
- Classifiers are now capable of running a full backward pass through defences
- `save` function for TensorFlow models
- New notebook with a usage example for the adversarial patch
- New notebook showing how to synthesize an adversarially robust architecture (see ICLR SafeML Workshop 2019: Evolutionary Search for Adversarially Robust Neural Network by M. Sinn, M. Wistuba, B. Buesser, M.-I. Nicolae, M.N. Tran)
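A sketch of a custom `Preprocessor` defence using the new attributes and the gradient-estimation hook. The import path, the `estimate_gradient` method name, and the treatment of `apply_fit`/`apply_predict` as settable attributes are assumptions (in some releases they are read-only properties); the rounding defence itself is purely illustrative:

```python
import numpy as np
from art.defences import Preprocessor  # import path assumed for the 0.x series

class Rounding(Preprocessor):
    """Illustrative non-differentiable defence: rounds inputs to one decimal."""

    def __init__(self):
        super(Rounding, self).__init__()
        self.apply_fit = False      # do not apply this defence during training
        self.apply_predict = True   # apply it at prediction time

    def __call__(self, x, y=None):
        # The forward pass of the defence: hard, non-differentiable rounding.
        return np.round(x, decimals=1), y

    def estimate_gradient(self, x, grad):
        # Straight-through estimate: treat rounding as the identity so that
        # gradient-based attacks can back-propagate through the defence.
        return grad

    def fit(self, x, y=None, **kwargs):
        # Nothing to fit for this stateless defence.
        pass
```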
Changed
- [Breaking change] Defences in classifiers are now specified as `Preprocessor` instances instead of strings
- [Breaking change] Parameter `random_init` in `FastGradientMethod`, `ProjectedGradientDescent` and `BasicIterativeMethod` has been renamed to `num_random_init` and now specifies the number of random initializations to run before choosing the best attack (see the sketch after this list)
- Possibility to specify the batch size when calling `get_activations` from the `Classifier` API
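A sketch combining both breaking changes, assuming an existing Keras `model` and test inputs `x_test`. The `defences` keyword and the simplified constructors are assumptions; exact argument names and required arguments (e.g. clip values) vary across the 0.x releases:

```python
from art.attacks import ProjectedGradientDescent
from art.classifiers import KerasClassifier
from art.defences import JpegCompression

# Defences are now passed to classifiers as Preprocessor instances.
defence = JpegCompression()  # constructor arguments omitted in this sketch
classifier = KerasClassifier(model=model, defences=[defence])

# random_init has been renamed: num_random_init counts random restarts,
# of which the strongest resulting attack is kept.
attack = ProjectedGradientDescent(classifier, eps=0.3, num_random_init=5)
x_adv = attack.generate(x_test)
```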
ART 0.7.0
This release contains a new poison removal method, as well as some restructuring of features recently added to the library.
Added
- Poison fixing method performing retraining, as part of the `ActivationDefence` class (see the sketch after this list)
- Example script showing how to use the poison removal method
- New module `wrappers`, containing features that alter the behaviour of a `Classifier`. These are to be used as wrappers for classifiers and passed directly to evasion attack instances.
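A sketch of the poison-detection workflow that the fixing method builds on, assuming a fitted `classifier` and (possibly poisoned) training data `x_train`, `y_train`. The module path and the `detect_poison` keyword arguments follow later ART releases and are assumptions for 0.7.0; the new retraining entry point is not named here:

```python
from art.poison_detection import ActivationDefence  # path assumed for 0.x

defence = ActivationDefence(classifier, x_train, y_train)

# Cluster the activations and flag suspicious training points.
report, is_clean = defence.detect_poison(nb_clusters=2, nb_dims=10,
                                         reduce="PCA")
print(report)
```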
Changed
- `ExpectationOverTransformations` has been moved to the `wrappers` module
- `QueryEfficientBBGradientEstimation` has been moved to the `wrappers` module
Removed
- Attacks no longer take an `expectation` parameter (breaking). This has been replaced by a direct call to the attack with an `ExpectationOverTransformations` instance (see the sketch after this list).
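A sketch of the replacement pattern, assuming a fitted `classifier` and inputs `x_test`. The `art.wrappers` location comes from this release's notes, but the `sample_size` and `transformation` arguments are assumptions and should be checked against the class signature:

```python
from art.attacks import FastGradientMethod
from art.wrappers import ExpectationOverTransformations

def transformation():
    # Yields the input transformations to average over; an identity
    # placeholder stands in for e.g. random rotations in this sketch.
    while True:
        yield lambda x: x

# Wrap the classifier instead of passing an `expectation` to the attack.
eot = ExpectationOverTransformations(classifier, sample_size=10,
                                     transformation=transformation)
attack = FastGradientMethod(eot, eps=0.1)
x_adv = attack.generate(x_test)
```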
Fixed
- Bug in the spatial transformations attack: when the attack does not succeed, the original samples are now returned (issue #40, fixed in #42, #43)
- Bug in Keras with loss functions that do not take labels in one-hot encoding (issue #41)
- Bug fix in activation defence against poisoning: incorrect test condition
- Bug fix in DeepFool: inverted stop condition when working with batches
- Import problem in `utils.py`: top-level imports were forcing users to install all supported ML frameworks
ART 0.6.0
Added
- PixelDefend defence
- Query-efficient black-box gradient estimates (NES)
- A general wrapper for classifiers allowing their behaviour to be changed (see `art/classifiers/wrapper.py`)
- 3D plot in visualization
- Saver for `PyTorchClassifier` (see the sketch after this list)
- Pickling for `KerasClassifier`
- Representation for all classifiers
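A sketch of the new persistence options, assuming a fitted `classifier` (a `PyTorchClassifier`) and `keras_classifier` (a `KerasClassifier`); the `filename`/`path` keywords follow later ART releases:

```python
# Persist the underlying PyTorch model; ART writes the file(s) under `path`.
classifier.save(filename="mnist_cnn", path="./saved_models")

# Keras classifiers can instead be pickled directly as of this release.
import pickle
with open("keras_classifier.pkl", "wb") as f:
    pickle.dump(keras_classifier, f)
```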
Changed
- We now use pretrained models for unit tests (see `art/utils.py`, functions `get_classifier_pt`, `get_classifier_kr`, `get_classifier_tf`)
- Keras models now accept any loss function
Removed
- `Detector` abstract class; detectors now directly extend `Classifier`
Thanks also to our external contributors! @AkashGanesan
ART 0.5.0
This release of ART adds two new evasion attacks and provides some bug fixes, as well as new features such as access to the learning phase (training/test) through the `Classifier` API, batching in evasion attacks, and expectation over transformations.
Added
- Spatial transformations evasion attack (class `art.attacks.SpatialTransformations`)
- Elastic net (EAD) evasion attack (class `art.attacks.ElasticNet`)
- Data generator support for multiple types of TensorFlow iterators
- New function and property in the `Classifier` API that allow explicit control of the learning phase (train/test)
- Reports for the poisoning module
- Most evasion attacks now support batching, specified by the new parameter `batch_size` (see the sketch after this list)
- `ExpectationOverTransformations` class, to be used with evasion attacks
- Parameter `expectation` of evasion attacks, which specifies the use of expectation over transformations
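A sketch combining batching and the learning-phase control, assuming a fitted `classifier` and inputs `x_test`; the `set_learning_phase` method name is an assumption for this release:

```python
from art.attacks import ElasticNet

# Put the model explicitly into test mode before attacking.
classifier.set_learning_phase(False)  # method name assumed

# Generate adversarial samples in mini-batches of 64.
attack = ElasticNet(classifier, batch_size=64)
x_adv = attack.generate(x_test)
```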
Changed
- Updated the list of attacks supported by universal perturbation
- PyLint and Travis configs
Fixed
- Indexing error in C&W L_2 attack (issue #29)
- Universal perturbation stop condition: attack was always stopping after one iteration
- Error with data subsampling in `AdversarialTrainer` when the ratio of adversarial samples is 1
ART 0.4.0
Added
- Class `art.classifiers.EnsembleClassifier`: support for ensembles under the `Classifier` interface
- Module `art.data_generators`: data feeders for dynamic loading and augmentation for all frameworks (see the sketch after this list)
- New function `fit_generator` for classifiers and the adversarial trainer
- C&W L_inf attack
- Class `art.defences.JpegCompression`: JPEG compression as preprocessing defence
- Class `art.defences.ThermometerEncoding`: thermometer encoding as preprocessing defence
- Class `art.defences.TotalVarMin`: total variance minimization as preprocessing defence
- Function `art.utils.master_seed`: set the master seed for random number generators
- `pylint` for Travis
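A sketch of the new data-generator workflow, assuming a fitted `classifier` and a native Keras generator `keras_flow`; the `KerasDataGenerator` class name and its `size`/`batch_size` arguments are assumptions based on later releases:

```python
from art.data_generators import KerasDataGenerator
from art.utils import master_seed

# Make the run reproducible across the supported random number generators.
master_seed(42)

# Wrap a native Keras data flow for use with ART.
art_gen = KerasDataGenerator(keras_flow, size=50000, batch_size=128)

# Train through the generator instead of materialising the whole dataset.
classifier.fit_generator(art_gen, nb_epochs=10)
```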
Changed
- Restructured the analyzers of the poisoning module
Fixed
- PyTorch classifier support on GPU
ART 0.3.0
This release brings many new features to ART, including a poisoning module, an adversarial sample detection module and support for MXNet models.
Added
- Access to layers and model activations through the `Classifier` API (see the sketch after this list)
- MXNet support
- Poison detection module, containing the poisoning detection method based on clustering activations
- Jupyter notebook with poisoning attack and detection example on MNIST
- Adversarial samples detection module, containing two detectors: one working based on inputs and one based on activations
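A sketch of the new layer access, assuming a fitted `classifier` and inputs `x_test`; the `layer_names` property and the `get_activations` signature follow later ART releases and are assumptions for 0.3.0:

```python
# List the layers ART can expose for this model.
print(classifier.layer_names)

# Fetch the activations of a given layer for a batch of inputs.
activations = classifier.get_activations(x_test, layer=1)
print(activations.shape)
```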
Changed
- Optimized JSMA attack (`art.attacks.SaliencyMapMethod`); it can now run on ImageNet data
- Optimized C&W attack (`art.attacks.CarliniL2Method`)
- Improved adversarial trainer, now covering a wide range of setups
Removed
- Hard-coded `config` folder. The config now gets created on the fly when running ART for the first time and is stored in the home folder `~/.art`
ART 0.2.0
This release makes ART framework-independent. The following backends are now supported: TensorFlow, Keras and PyTorch.
Added
- New framework-independent `Classifier` interface
- Backend support for TensorFlow, Keras and PyTorch
- Basic interface for detecting adversarial samples (no concrete method implemented for now)
- Gaussian augmentation
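A self-contained sketch of the Gaussian augmentation defence; the `sigma`/`ratio` arguments and the callable interface follow later ART releases and are assumptions here:

```python
import numpy as np
from art.defences import GaussianAugmentation

# Toy data: 100 samples with 10 features and one-hot labels.
x = np.random.rand(100, 10).astype(np.float32)
y = np.eye(2)[np.random.randint(0, 2, size=100)]

# Append one noisy copy of each sample (ratio=1.0), noise stddev sigma=0.1.
augmenter = GaussianAugmentation(sigma=0.1, ratio=1.0)
x_aug, y_aug = augmenter(x, y)
print(x_aug.shape)  # twice as many samples as x
```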
Changed
- All attacks now fit the new `Classifier` interface
Fixed
- `to_categorical` utility function for unsqueezed labels
- Norms in CLEVER score
- Source code folder name to correct PyPI install
Removed
- Hard-coded architectures for datasets / model types: CNN, ResNet, MLP
ART 0.1.0
This is the initial release of ART. The following features are currently supported:
- `Classifier` interface, supporting a few predefined architectures (CNN, ResNet, MLP) for standard datasets (MNIST, CIFAR10), as well as custom models from users
- `Attack` interface, supporting a few evasion attacks:
- FGM & FGSM
- Jacobian saliency map attack
- Carlini & Wagner L_2 attack
- DeepFool
- NewtonFool
- Virtual adversarial method (to be used for virtual adversarial training)
- Universal perturbation
- Defences
- Preprocessing interface, currently implemented by feature squeezing, label smoothing, spatial smoothing
- Adversarial training
- Metrics for measuring robustness: empirical robustness (minimal perturbation), loss sensitivity and CLEVER score (see the sketch after this list)
- Utilities for loading datasets, some preprocessing, common maths manipulations
- Scripts for launching some basic pipelines for training, tests and attacking
- Unit tests
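A sketch of the robustness metrics, assuming a fitted `classifier` and inputs `x_test`; the module path and function signature follow later ART releases (e.g. `art.metrics.empirical_robustness`) and are assumptions for 0.1.0:

```python
from art.metrics import empirical_robustness

# Average minimal perturbation (under FGSM) needed to fool the classifier.
score = empirical_robustness(classifier, x_test, attack_name="fgsm",
                             attack_params={"eps_step": 0.1})
print("Empirical robustness:", score)
```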