Evaluating the efficiency of Metric Mapping algorithm to mitigate black-box membership inference attack.

aayushhyadav/Neural-Network-Privacy

Mitigating Black-Box Membership Inference Attack using Metric Mapping

Membership Inference Attack

Membership inference attacks can be carried out against neural networks to infer whether a data record was part of the training dataset, breaching user privacy. Black-box membership inference attacks require no access to the model internals: they simply analyze metrics like entropy, standard deviation, and maximum posterior probability of the output prediction vectors, as these metrics follow different distributions for members and non-members of the training dataset.
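As an illustration of the signal such an attacker exploits, the sketch below computes the three metrics named above from a model's softmax output. The example vectors are invented for demonstration: a sharply peaked "member-like" prediction versus a diffuse "non-member-like" one.

```python
import numpy as np

def confidence_metrics(probs):
    """Metrics an attacker can compute from a single prediction vector."""
    probs = np.asarray(probs, dtype=float)
    eps = 1e-12  # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps))
    std_dev = np.std(probs)
    max_prob = np.max(probs)
    return entropy, std_dev, max_prob

# Hypothetical predictions: members tend to get confident, low-entropy
# outputs; non-members tend to get flatter, high-entropy outputs.
member_like = [0.97, 0.01, 0.01, 0.01]
non_member_like = [0.40, 0.25, 0.20, 0.15]

h_m, s_m, p_m = confidence_metrics(member_like)
h_n, s_n, p_n = confidence_metrics(non_member_like)
```

An attacker who observes many such vectors can pick a threshold on any of these metrics to separate the two distributions.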

Metric Mapping

The Metric Mapping algorithm aims to eliminate the differences between the metric distributions of members and non-members by manipulating the output prediction vectors while keeping the predicted class unchanged, ensuring that utility is not compromised in order to preserve privacy.

Experimental Analysis

The Jupyter notebooks contain experiments measuring the effectiveness of Metric Mapping in mitigating black-box membership inference attacks across different datasets, in the presence of class imbalance and overfitting. Furthermore, the algorithm is compared with the DP-SGD, DP-Logits, and Dropout techniques with respect to preserving privacy and maintaining utility.
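A common way to score such a defense (a hypothetical harness, not the notebooks' actual code) is the accuracy of a simple threshold attack on a balanced set of member and non-member scores: an undefended model yields accuracy well above 0.5, while an effective defense pushes it back toward the 0.5 of random guessing.

```python
import numpy as np

def threshold_attack_accuracy(member_scores, non_member_scores, threshold):
    """Accuracy of an attack that predicts 'member' whenever a confidence
    score (e.g. maximum posterior probability) meets a threshold.
    Hypothetical evaluation sketch, assuming balanced member/non-member sets."""
    member_scores = np.asarray(member_scores)
    non_member_scores = np.asarray(non_member_scores)
    tp = np.sum(member_scores >= threshold)   # members correctly flagged
    tn = np.sum(non_member_scores < threshold)  # non-members correctly passed
    return (tp + tn) / (len(member_scores) + len(non_member_scores))

# Invented scores: well-separated before a defense, overlapping after.
before = threshold_attack_accuracy([0.95, 0.90, 0.85], [0.50, 0.60, 0.55], 0.7)
after = threshold_attack_accuracy([0.60, 0.55, 0.50], [0.55, 0.60, 0.50], 0.7)
```

Sweeping the threshold and reporting the best attack accuracy (or the full ROC curve) gives a fairer picture than any single cut-off.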
