Releases
1.5.1
We are excited to announce the release of OpenFL 1.5.1 - our first since moving to LF AI & Data! This release brings the following changes.
Highlights
Documentation accessibility improvements: As part of our Global Accessibility Awareness Day (GAAD) Pledge, the OpenFL project is making strides towards more accessible documentation. This release includes the integration of the Intel® One Mono font, contrast color improvements, formatting improvements, and new accessibility-focused issues to address in the future.
Documentation to federate a Generally Nuanced Deep Learning Framework (GaNDLF) model with OpenFL
New OpenFL Interactive API tutorials
Improvements to workspace export and import
Many documentation improvements and updates
Bug fixes
Fixed dependency vulnerabilities
1.5
Highlights
New Workflow Interface (Experimental) - a new way of composing federated learning experiments, inspired by Metaflow, that enables the creation of custom aggregator and collaborator tasks. This initial release is intended for simulation on a single node (using the LocalRuntime); distributed execution (via a FederatedRuntime) will be enabled in a future release.
New use cases enabled by the workflow interface:
Privacy Meter - based on state-of-the-art membership inference attacks, Privacy Meter provides a tool to quantitatively audit data privacy in statistical and machine learning algorithms. The objective of a membership inference attack is to determine whether a given data record was in the training dataset of the target model. Measures of success (accuracy, area under the ROC curve, true positive rate at a given false positive rate, …) for particular membership inference attacks against a target model are used to estimate the privacy loss for that model (how much information the model leaks about its training data). Since stronger attacks may be possible, these measures serve as lower bounds on the actual privacy loss. The Privacy Meter workflow example generates privacy loss reports for each party’s local model updates as well as for the global models throughout all rounds of FL training.
Federated Model Watermarking using the WAFFLE method
Differential Privacy – global differentially private federated learning using the Opacus library, providing a differentially private result with respect to the inclusion or exclusion of any collaborator in the training process. At each round, a subset of collaborators is selected via Poisson sampling over all collaborators; the selected collaborators perform local training with periodic clipping of their model delta (with respect to the current global model) to bound their contribution to the average of local model updates. Gaussian noise is then added to the average of these local models at the aggregator. This example is implemented in two different but statistically equivalent ways: the lower-level API uses the RDPAccountant and DPDataLoader Opacus objects to perform privacy accounting and collaborator selection, respectively, whereas the higher-level API uses the PrivacyEngine Opacus object for collaborator selection and internally uses RDPAccountant for privacy accounting.
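The round structure described above (Poisson sampling of collaborators, delta clipping, noisy averaging at the aggregator) can be sketched in a few lines of NumPy. The function name and parameters below are illustrative only, not the OpenFL or Opacus API, and real privacy accounting (e.g. via RDPAccountant) is deliberately omitted:

```python
import numpy as np

def dp_average(local_deltas, clip_norm, noise_multiplier, sample_prob, rng):
    """One illustrative round of DP aggregation (hypothetical helper,
    not the OpenFL API).

    local_deltas : list of 1-D arrays, one model delta per collaborator.
    Poisson sampling selects each collaborator independently with
    probability sample_prob; selected deltas are clipped in L2 norm to
    clip_norm, averaged, and Gaussian noise scaled to the clip norm is
    added to the average.
    """
    selected = [d for d in local_deltas if rng.random() < sample_prob]
    if not selected:
        return None  # no collaborator selected this round
    clipped = [d * min(1.0, clip_norm / np.linalg.norm(d)) for d in selected]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(selected),
                       size=avg.shape)
    return avg + noise
```

With noise_multiplier set to 0 the function degenerates to clipped FedAvg, which makes the clipping behavior easy to check in isolation.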
Official support for Python 3.9 and 3.10
EDEN Compression Pipeline: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning (paper link)
Improvements to the resiliency and security of the director / envoy infrastructure:
Optional notification to plan participants, asking them to agree to an experiment before it is sent to their infrastructure
Improved resistance to loss of network connectivity and failure at various stages of execution
Windows Support (Experimental): Continuous Integration now tests OpenFL on Windows, but certain features may not work as expected. Full Windows support will be added in a future release.
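The aggregator/collaborator task pattern behind the Workflow Interface can be illustrated with a self-contained single-node simulation. This sketch uses only NumPy and hypothetical function names; the real experimental API instead expresses the flow as a class run by the LocalRuntime, so treat this purely as an outline of the control flow:

```python
import numpy as np

def collaborator_task(global_weights, local_data):
    """Hypothetical collaborator task: one 'training' step that moves
    the received global weights toward the mean of the local data."""
    return 0.5 * (global_weights + local_data.mean(axis=0))

def aggregator_task(updates):
    """Hypothetical aggregator task: plain FedAvg over the updates."""
    return np.mean(updates, axis=0)

def run_experiment(datasets, rounds=3):
    """Single-node simulation loop: each round, every collaborator runs
    its task on the current global weights, then the aggregator task
    combines the results into the next global model."""
    weights = np.zeros(datasets[0].shape[1])
    for _ in range(rounds):
        updates = [collaborator_task(weights, d) for d in datasets]
        weights = aggregator_task(updates)
    return weights
```

Each round the global model drifts toward the average of the collaborators' local data means, which is the qualitative behavior one expects from a FedAvg-style flow.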
1.4
The OpenFL v1.4 release contains the following:
tf.data Pipeline Example
PrivilegedAggregationFunction interface
FeTS Challenge Task Runner
Bug fixes and other improvements
1.3
The OpenFL v1.3 release contains the following updates:
FedCurv aggregation algorithm
HuggingFace/transformers audio classification example using SUPERB dataset
PyTorch Lightning GAN example
NumPy Linear Regression example in Google Colab
Adaptive Federated Optimization algorithms implementation:
FedYogi
FedAdagrad
FedAdam
MXNet landmarks regression example as a custom plugin to OpenFL
Migration to JupyterLab
Bug fixes and other improvements
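The adaptive algorithms listed above (FedYogi, FedAdagrad, FedAdam) come from Reddi et al.'s "Adaptive Federated Optimization": the server treats the averaged collaborator delta like a gradient and applies an Adam-style update, with the three variants differing only in the second-moment rule. A minimal NumPy sketch of the FedAdam variant (function and parameter names are hypothetical, not the OpenFL interface):

```python
import numpy as np

def fedadam_server_update(x, delta, m, v,
                          lr=0.1, beta1=0.9, beta2=0.99, tau=1e-3):
    """One server-side FedAdam step (illustrative sketch).

    x     : current global model parameters
    delta : averaged collaborator model update for this round
    m, v  : server-side first/second moment accumulators
    tau   : adaptivity constant preventing division by ~zero
    """
    m = beta1 * m + (1 - beta1) * delta          # first moment
    v = beta2 * v + (1 - beta2) * delta ** 2     # FedAdam second moment
    x = x + lr * m / (np.sqrt(v) + tau)          # adaptive server step
    return x, m, v
```

Swapping the second-moment line for an accumulating sum gives FedAdagrad, and for a sign-corrected update gives FedYogi, while the rest of the server step stays the same.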
1.2
The OpenFL v1.2 release contains the following updates:
Long-lived entities: Director/Envoy for supporting multiple experiments within the same federation
Scalable PKI: semi-automatic mechanism for certificates distribution via step-ca
Examples with new Interactive API + Director/Envoy: TensorFlow Next Word Prediction, PyTorch Re-ID on Market, PyTorch MobileNet v2 on TinyImageNet
3D U-Net TensorFlow workspace for BraTS 2020 for CLI-based workflow
AggregationFunction interface for custom aggregation functions in the new Interactive API
Autocomplete for the fx CLI
Bug fixes and documentation improvements
1.1
The OpenFL v1.1 release contains the following updates:
New Interactive Python API (experimental)
Example FedProx algorithm implementation for PyTorch and TensorFlow
AggregationFunctionInterface for custom aggregation functions
Keras-based NLP example
Fixed lossy compression pipelines and added an example for usage
Bug fixes and documentation improvements
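FedProx (mentioned in this release) modifies each collaborator's local objective by adding a proximal term, (mu/2)·||w − w_global||², that penalizes drift away from the current global model and stabilizes training on heterogeneous data. A minimal NumPy sketch of one local step (names hypothetical, not the OpenFL implementation):

```python
import numpy as np

def fedprox_local_step(w, w_global, grad_f, mu=0.1, lr=0.01):
    """One local SGD step on the FedProx objective
    f(w) + (mu/2) * ||w - w_global||^2.

    grad_f : callable returning the gradient of the local loss f at w.
    The extra mu * (w - w_global) term is the gradient of the proximal
    penalty, pulling the local weights back toward the global model.
    """
    g = grad_f(w) + mu * (w - w_global)
    return w - lr * g
```

With mu = 0 this reduces to plain local SGD, i.e. the FedAvg local update.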
1.0.1
v1.0.1 is a patch release. It includes the following updates:
New docker CI tests
New PyTorch UNet Kvasir tutorial
Cleanup / fixes to other OpenFL tutorials
Fixed description for PyPI
Status/documentation/community badges for README.md
1.0
This release includes:
The official open source release of OpenFL
TensorFlow 2.0 and PyTorch support
Examples for classification, segmentation, and adversarial training
No-install Docker and Singularity* deployments
Python native API intended for single node federated learning experiments
fx CLI for multi-node production deployments
Additional test coverage for OpenFL components
* Singularity supported via DockerHub integration: singularity shell docker://openfl:latest