Basic PPFL Training¶
APPFL
provides users with the capability to simulate and train PPFL on a single machine, a cluster, or multiple heterogeneous machines. We refer to
simulation as running PPFL experiments on a single machine or a cluster without actual data decentralization
training as running PPFL experiments on multiple (heterogeneous) machines with actual decentralization of client datasets
Hence, we describe two types of PPFL runs:
Simulating PPFL is useful for those who develop, test, and validate new models and algorithms for PPFL, whereas training PPFL is for those who consider actual PPFL settings in practice.
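To make the distinction concrete, a simulation keeps every client's dataset in one process and only mimics the client/server exchange. The toy sketch below runs one such simulated federated-averaging loop in plain Python; it is illustrative only and does not use APPFL's API (the local-update rule and all names here are hypothetical):

```python
# Toy single-process FL simulation (illustrative only, NOT APPFL's API):
# every "client" dataset lives in this one process, which is exactly what
# distinguishes a simulation from a real decentralized training run.
client_datasets = [[1.0, 2.0], [3.0, 5.0], [2.0, 2.0]]

def local_update(global_w, data):
    # hypothetical local step: move the weight halfway toward the
    # client's data mean
    target = sum(data) / len(data)
    return global_w + 0.5 * (target - global_w)

global_w = 0.0
for _ in range(10):  # simulated communication rounds
    updates = [local_update(global_w, d) for d in client_datasets]
    global_w = sum(updates) / len(updates)  # FedAvg-style aggregation

print(round(global_w, 3))  # → 2.498, approaching the overall mean 2.5
```

In a real training run, each `local_update` would instead execute on a separate machine holding its own dataset, with only the updates communicated back to the server.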
Sample template¶
Note
Before reading this section, it is highly recommended to check out Tutorials for more detailed examples in notebooks.
For either simulation or training, a skeleton of the script for running PPFL can be written as follows:
 1 from appfl.config import Config
 2 from omegaconf import OmegaConf
 3
 4 def main():
 5
 6     # load default configuration
 7     cfg = OmegaConf.structured(Config)
 8
 9     # change configuration if needed
10     ...
11
12     # define model, loss, and data
13     model = ...    # user-defined model
14     loss_fn = ...  # user-defined loss function
15     data = ...     # user-defined datasets
16
17     # the choice of PPFL run
18
19 if __name__ == "__main__":
20     main()
A few remarks on the code:
Line 7 loads the default configuration for the PPFL run. See How to set configuration for more details.
The user-defined model, loss, and data are set in lines 13 to 15; see How to define model and How to define datasets for more details.
Depending on the choice of PPFL run, we then call the corresponding API functions at line 17.
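The skeleton can be exercised end to end with stand-ins: the sketch below substitutes a plain `dataclass` for APPFL's `Config` (the real one is an OmegaConf structured config with many more fields) and stubs for the model, loss, and data, so every name inside `main` here is hypothetical:

```python
from dataclasses import dataclass

# Stand-in for appfl.config.Config (illustrative only; the real Config
# is an OmegaConf structured config with many more fields).
@dataclass
class Config:
    num_clients: int = 1
    num_epochs: int = 2
    device: str = "cpu"

def main():
    # load default configuration (cf. cfg = OmegaConf.structured(Config))
    cfg = Config()

    # change configuration if needed: attribute-style overrides,
    # analogous to editing fields on the OmegaConf object
    cfg.num_clients = 4
    cfg.num_epochs = 5

    # define model, loss, and data (stubs standing in for user code)
    model = object()
    loss_fn = lambda pred, target: 0.0
    data = [[] for _ in range(cfg.num_clients)]  # one dataset per client

    # the choice of PPFL run would go here (serial simulation, MPI
    # launch, etc.); see the APPFL API reference for the entry points
    return cfg, model, loss_fn, data

if __name__ == "__main__":
    cfg, model, loss_fn, data = main()
    print(cfg.num_clients)
```

The key design point the skeleton encodes is that the configuration, model, loss, and data are all assembled before the run type is chosen, so the same script body works for either a simulation or a decentralized training launch.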