Features

Running a Federation

OpenFL has multiple options for setting up a federation and running experiments, depending on the user's needs.

Task Runner

Define an experiment and distribute it manually. All participants can verify the model code and FL plan prior to execution. The federation is terminated when the experiment is finished. Formerly known as the aggregator-based workflow. For more info

Interactive

Set up long-lived components to run many experiments in series. Recommended for FL research when many changes to the model, dataloader, or hyperparameters are expected. Formerly known as the director-based workflow. For more info

Workflow Interface (Experimental)

Formulate the experiment as a series of tasks, or a flow. Every flow begins with the start task and concludes with the end task. Heavily influenced by the interface and design of Netflix’s Metaflow, the popular framework for data scientists. For more info
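
A minimal sketch of such a flow is shown below; the import paths and placement decorators are assumptions based on the experimental workflow interface and may differ between OpenFL releases:

    # Hypothetical minimal flow: import paths and decorator names are
    # assumptions and may vary between OpenFL versions.
    from openfl.experimental.interface import FLSpec
    from openfl.experimental.placement import aggregator, collaborator

    class MyFlow(FLSpec):

        @aggregator
        def start(self):
            # Every flow begins with the `start` task on the aggregator.
            self.next(self.local_train, foreach='collaborators')

        @collaborator
        def local_train(self):
            # Per-collaborator work (e.g. one epoch of local training) goes here.
            self.next(self.join)

        @aggregator
        def join(self, inputs):
            # Combine results from all collaborators, then finish.
            self.next(self.end)

        @aggregator
        def end(self):
            # Every flow concludes with the `end` task.
            pass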

Aggregation Algorithms

FedAvg

Paper: McMahan et al., 2017

Default aggregation algorithm in OpenFL. Each collaborator's local model weights are scaled by the collaborator's relative data size, and the scaled weights are summed, producing a weighted average.
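
Conceptually, for local weights w_k trained on n_k samples, the aggregate is sum_k (n_k / sum_j n_j) * w_k. A framework-agnostic sketch of this weighted average (not the OpenFL implementation itself):

    import numpy as np

    def fedavg(local_weights, num_samples):
        """Weighted average of per-collaborator weights.

        local_weights: one list of numpy arrays (layers) per collaborator.
        num_samples: number of training samples each collaborator used.
        """
        total = sum(num_samples)
        relative = [n / total for n in num_samples]
        # Scale each collaborator's tensors by its data share, then sum per layer.
        return [
            sum(r * w for r, w in zip(relative, layer_weights))
            for layer_weights in zip(*local_weights)
        ]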

FedProx

Paper: Li et al., 2020

FedProx in OpenFL is implemented as a custom optimizer for PyTorch/TensorFlow. In order to use FedProx, do the following:

  1. PyTorch:

  • replace your optimizer with the SGD-based openfl.utilities.optimizers.torch.FedProxOptimizer or the Adam-based openfl.utilities.optimizers.torch.FedProxAdam. Also, save the model weights for the next round by calling the optimizer's .set_old_weights() method before the training epoch (see the sketch after this list).

  2. TensorFlow:

  • replace your optimizer with the SGD-based openfl.utilities.optimizers.keras.FedProxOptimizer.

For more details, see ../openfl-tutorials/Federated_FedProx_*_MNIST_Tutorial.ipynb where * is the framework name.
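
The PyTorch usage might look like the following sketch; the constructor arguments (lr, mu) and the exact set_old_weights() argument are assumptions to be checked against the OpenFL source:

    import torch
    from openfl.utilities.optimizers.torch import FedProxOptimizer

    model = torch.nn.Linear(10, 2)
    loss_fn = torch.nn.CrossEntropyLoss()
    # `lr` and `mu` (proximal term strength) are illustrative values.
    optimizer = FedProxOptimizer(model.parameters(), lr=0.01, mu=0.1)

    def train_epoch(model, optimizer, dataloader):
        # Snapshot the round-start (global) weights before local training so
        # the proximal term can penalize drift away from them.
        optimizer.set_old_weights([p.detach().clone() for p in model.parameters()])
        model.train()
        for x, y in dataloader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()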

FedOpt

Paper: Reddi et al., 2020

FedOpt in OpenFL: Adaptive Aggregation Functions

FedCurv

Paper: Shoham et al., 2019

Requires PyTorch >= 1.9.0. Other frameworks are not supported yet.

Use openfl.utilities.fedcurv.torch.FedCurv to override the train function using its .get_penalty(), .on_train_begin(), and .on_train_end() methods. In addition, override the default AggregationFunction of the train task with openfl.interface.aggregation_functions.FedCurvWeightedAverage. See the PyTorch_Histology_FedCurv tutorial in the ../openfl-tutorials/interactive_api directory for more details.
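
A rough sketch of a wrapped train function is shown below; the FedCurv constructor argument and the exact method signatures are assumptions, so consult the tutorial for the authoritative usage:

    import torch
    from openfl.utilities.fedcurv.torch import FedCurv

    model = torch.nn.Linear(10, 2)
    loss_fn = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    # `importance` (strength of the curvature penalty) is an assumed argument name.
    fedcurv = FedCurv(model, importance=1e7)

    def train(model, dataloader, device='cpu'):
        fedcurv.on_train_begin(model)  # restore Fisher information from previous rounds
        model.train()
        for x, y in dataloader:
            optimizer.zero_grad()
            # Add the FedCurv penalty to the task loss.
            loss = loss_fn(model(x), y) + fedcurv.get_penalty(model)
            loss.backward()
            optimizer.step()
        fedcurv.on_train_end(model, dataloader, device)  # update Fisher information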

Federated Evaluation

Evaluate the accuracy and performance of your model on data distributed across decentralized nodes without compromising data privacy and security. For more info

Privacy Meter

Quantitatively audit data privacy in statistical and machine learning algorithms. For more info