This tool facilitates the evaluation and comparison of machine unlearning methods through interactive visualizations and analytical insights. It enables systematic examination of model behavior via privacy attacks and performance metrics.
Try our live demo: Machine Unlearning Comparator
The Machine Unlearning Comparator provides comparisons of several baseline methods:
- Fine-Tuning: Leverages catastrophic forgetting by fine-tuning the model on the remaining (retain) data with an increased learning rate
- Gradient-Ascent: Moves in the direction of increasing loss for forget samples using negative gradients
- Random-Labeling: Fine-tunes the model by randomly reassigning labels for forget samples, excluding the original forget class labels
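The update directions behind Gradient-Ascent and Random-Labeling can be sketched on a toy model. This is a minimal illustration, not the comparator's implementation: it assumes a linear softmax classifier and hand-rolled cross-entropy gradients, whereas the tool operates on real trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained model (assumption: a linear softmax
# classifier with 3 classes over 4-dimensional inputs).
num_classes, dim = 3, 4
W = rng.normal(size=(dim, num_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(W, X, y):
    # Mean cross-entropy loss on labeled samples.
    p = softmax(X @ W)
    return -np.mean(np.log(p[np.arange(len(y)), y]))

def ce_grad(W, X, y):
    # Gradient of the mean cross-entropy loss w.r.t. W.
    p = softmax(X @ W)
    p[np.arange(len(y)), y] -= 1.0
    return X.T @ p / len(y)

# Forget set: samples whose original label (class 0) should be unlearned.
X_f = rng.normal(size=(8, dim))
y_f = np.zeros(8, dtype=int)

lr = 0.05

# Gradient-Ascent: step *up* the loss surface on the forget samples
# (the negative of the usual descent update), increasing their loss.
W_ga = W + lr * ce_grad(W, X_f, y_f)

# Random-Labeling: reassign each forget label uniformly among the
# *other* classes (excluding the original forget class), then take
# an ordinary descent step toward the random targets.
y_rand = np.array([rng.choice([c for c in range(num_classes) if c != t])
                   for t in y_f])
W_rl = W - lr * ce_grad(W, X_f, y_rand)
```

Note the only difference between the ascent update and ordinary fine-tuning is the sign in front of the gradient term.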
Upload and evaluate your own unlearning methods! The comparator supports custom implementations, enabling you to:
- Upload your own implementation and benchmark it against the established baselines
- Compare results using standardized evaluation metrics and privacy attacks
It includes various visualizations and privacy-attack evaluations to assess the effectiveness of each method.
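Conceptually, a custom unlearning method is a procedure that maps the current model weights plus the retain/forget data splits to updated weights. The sketch below illustrates that shape only; the `custom_unlearn` name, its signature, and the toy linear-regression model are all hypothetical, and the comparator's actual upload format is not shown here.

```python
import numpy as np

rng = np.random.default_rng(1)

def custom_unlearn(W, retain, forget, lr=0.1, steps=10):
    """Hypothetical custom-method shape: weights in, updated weights out.

    Illustrative strategy: plain gradient descent on the retain set
    only, i.e. fine-tuning without the forget samples.
    """
    X_r, y_r = retain
    for _ in range(steps):
        grad = 2 * X_r.T @ (X_r @ W - y_r) / len(X_r)  # d/dW of mean MSE
        W = W - lr * grad
    return W

# Toy linear-regression data, split into retain and forget parts.
X = rng.normal(size=(20, 3))
W_true = rng.normal(size=(3, 1))
y = X @ W_true
retain, forget = (X[:15], y[:15]), (X[15:], y[15:])

W0 = np.zeros((3, 1))
W1 = custom_unlearn(W0, retain, forget)
```

A method packaged this way can then be scored with the same metrics and privacy attacks applied to the baselines.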
- Install dependencies using Hatch: `hatch shell`
- Start the backend server: `hatch run start`
- Install dependencies using pnpm: `pnpm install`
- Start the frontend server: `pnpm start`