
v1.5

@psfoley released this 25 Jan 22:56
· 419 commits to develop since this release
e099e28

Highlights

We are excited to announce the release of OpenFL 1.5! This release brings the following changes:

  • New Workflow Interface (Experimental) - a new way of composing federated learning experiments, inspired by Metaflow, that enables the creation of custom aggregator and collaborator tasks. This initial release is intended for simulation on a single node (using the LocalRuntime); distributed execution (FederatedRuntime) will be enabled in a future release.
  • New use cases enabled by the workflow interface:
    • End-of-round validation with aggregator dataset
    • Privacy Meter - Based on state-of-the-art membership inference attacks, Privacy Meter provides a tool to quantitatively audit data privacy in statistical and machine learning algorithms. The objective of a membership inference attack is to determine whether a given data record was in the training dataset of the target model. Measures of attack success against a target model (accuracy, area under the ROC curve, true positive rate at a given false positive rate, ...) are used to estimate the privacy loss of that model, i.e., how much information it leaks about its training data. Since stronger attacks may be possible, these measures serve as lower bounds on the actual privacy loss. The Privacy Meter workflow example generates privacy loss reports for each party's local model updates, as well as for the global model, throughout all rounds of FL training.
    • Vertical Federated Learning Examples
    • Federated Model Watermarking using the WAFFLE method
    • Differential Privacy – Globally differentially private federated learning using the Opacus library, yielding a result that is differentially private with respect to the inclusion or exclusion of any collaborator in the training process. At each round, a subset of collaborators is selected via Poisson sampling over all collaborators. The selected collaborators perform local training, periodically clipping their model delta (with respect to the current global model) to bound their contribution to the average of local model updates. Gaussian noise is then added to the average of these local models at the aggregator. This example is implemented in two different but statistically equivalent ways: the lower-level API uses the Opacus RDPAccountant and DPDataLoader objects for privacy accounting and collaborator selection, respectively, whereas the higher-level API uses the Opacus PrivacyEngine object for collaborator selection and internally relies on RDPAccountant for privacy accounting.
  • Habana Accelerator Support
  • Official support for Python 3.9 and 3.10
  • EDEN Compression Pipeline: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning (paper link)
  • FLAX Framework Support
  • Improvements to the resiliency and security of the director / envoy infrastructure:
    • Optional notification asking plan participants to approve an experiment before it is sent to their infrastructure
    • Improved resistance to loss of network connectivity and failure at various stages of execution
  • Windows Support (Experimental): Continuous Integration now tests OpenFL on Windows, but certain features may not work as expected. Full Windows support will be added in a future release.
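The workflow interface composes an experiment as a sequence of decorated steps, in the style of Metaflow. The following is a minimal, self-contained sketch of that step-composition pattern only; `SimpleFlow`, `FedAvgFlow`, and the runner loop are hypothetical illustrations, not OpenFL's actual classes.

```python
# Sketch of the Metaflow-style step composition behind the workflow
# interface: steps share state via `self` and chain by returning the
# next step. Hypothetical classes, not OpenFL's API.

class SimpleFlow:
    """Runs steps in order, starting at `start`, until a step returns None."""

    def run(self):
        step = self.start
        while step is not None:
            step = step()  # each step returns the next step (or None)


class FedAvgFlow(SimpleFlow):
    def __init__(self, collaborator_data):
        self.collaborator_data = collaborator_data

    def start(self):
        self.global_model = 0.0  # aggregator task: initialize the model
        return self.local_training

    def local_training(self):
        # collaborator task: move toward the local data mean
        self.updates = [sum(d) / len(d) for d in self.collaborator_data]
        return self.aggregate

    def aggregate(self):
        # aggregator task: average the collaborator updates
        self.global_model = sum(self.updates) / len(self.updates)
        return None


flow = FedAvgFlow([[1.0, 2.0], [3.0, 5.0]])
flow.run()
print(flow.global_model)  # average of local means: (1.5 + 4.0) / 2 = 2.75
```

In the real interface, aggregator and collaborator placement is expressed with decorators on the steps, and the same flow can later target a distributed runtime.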
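One of the attack-success measures named in the Privacy Meter highlight is the true positive rate at a given false positive rate. The toy sketch below illustrates how that metric is computed for a score-based membership inference attack; the scores and the thresholding rule are illustrative, not Privacy Meter's implementation.

```python
# Toy TPR-at-fixed-FPR computation for a membership inference attack.
# A higher score means the attacker is more confident the record was
# in the training set.

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr):
    """Best TPR achievable by any threshold whose FPR is <= target_fpr."""
    thresholds = sorted(set(member_scores) | set(nonmember_scores),
                        reverse=True)
    best_tpr = 0.0
    for t in thresholds:  # classify "member" when score >= t
        fpr = sum(s >= t for s in nonmember_scores) / len(nonmember_scores)
        if fpr <= target_fpr:
            tpr = sum(s >= t for s in member_scores) / len(member_scores)
            best_tpr = max(best_tpr, tpr)
    return best_tpr


# members tend to receive higher scores than non-members
members = [0.9, 0.8, 0.75, 0.6]
nonmembers = [0.7, 0.4, 0.3, 0.2]
print(tpr_at_fpr(members, nonmembers, target_fpr=0.25))  # prints 1.0
```

Because stronger attacks may exist, a number like this is a lower bound on the privacy loss, as the highlight notes.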
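The round structure described in the Differential Privacy highlight (Poisson sampling of collaborators, clipping of each model delta, Gaussian noise added to the average at the aggregator) can be sketched on scalar "models" as follows. All names and constants here are illustrative; the actual example relies on Opacus for sampling and privacy accounting.

```python
# Illustrative single round of globally differentially private FedAvg
# on scalar models: Poisson sampling, per-collaborator delta clipping,
# then Gaussian noise at the aggregator. Not the OpenFL example code.
import random


def dp_fedavg_round(global_model, local_models, sample_prob, clip_norm,
                    noise_std, rng):
    # 1. Poisson sampling: each collaborator participates independently
    selected = [m for m in local_models if rng.random() < sample_prob]
    if not selected:
        return global_model
    # 2. clip each delta w.r.t. the current global model
    clipped = []
    for m in selected:
        delta = m - global_model
        scale = min(1.0, clip_norm / max(abs(delta), 1e-12))
        clipped.append(delta * scale)
    # 3. average the clipped deltas, add Gaussian noise at the aggregator
    avg_delta = sum(clipped) / len(clipped)
    noisy_delta = avg_delta + rng.gauss(0.0, noise_std / len(clipped))
    return global_model + noisy_delta


rng = random.Random(0)
model = 0.0
for _ in range(3):
    model = dp_fedavg_round(model, [1.0, 2.0, 3.0], sample_prob=0.8,
                            clip_norm=0.5, noise_std=0.1, rng=rng)
print(round(model, 3))  # noisy global model after 3 rounds
```

Clipping bounds any single collaborator's influence on the average, which is what makes the added Gaussian noise sufficient for a collaborator-level privacy guarantee.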

Breaking Changes

  • Removal of Python 3.6 support due to numpy requirements
  • Removal of FastEstimator examples due to dependency package incompatibility with OpenFL

What's Changed

New Contributors

Full Changelog: v1.4...v1.5