This page highlights major milestones, software releases, and capability expansions from the APPFL (Advanced Privacy-Preserving Federated Learning) ecosystem over the past four years. APPFL serves as a core technology pillar within PALISADE-X, enabling scalable, privacy-preserving, and federated AI across cloud, HPC, and secure computing environments.
Key highlights
Experimental support for the GA4GH Task Execution Service (TES) within APPFL communicators (see the TES sketch after this list).
Integration of Fed-SB to enable efficient federated tuning of large foundation models.
New tutorials demonstrating federated training of grid foundation models using real power-grid datasets.
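For orientation, the sketch below shows the general shape of a GA4GH TES exchange: a task described in JSON is POSTed to a TES server's REST endpoint and then polled for its state. The server URL, container image, and command are placeholders, and APPFL's TES communicator may wrap this exchange differently; treat it as a minimal illustration of the standard, not APPFL internals.

    # Minimal sketch of a GA4GH TES task submission; the server URL and task
    # contents are placeholders, not APPFL's actual communicator internals.
    import requests

    TES_BASE = "https://tes.example.org"  # hypothetical TES server

    task = {
        "name": "appfl-client-training",
        "executors": [
            {
                "image": "python:3.11-slim",
                "command": ["python", "-c", "print('federated client step')"],
            }
        ],
    }

    resp = requests.post(f"{TES_BASE}/ga4gh/tes/v1/tasks", json=task, timeout=30)
    resp.raise_for_status()
    task_id = resp.json()["id"]  # TES returns the new task's ID

    # Poll the task until it reaches a terminal state such as COMPLETE.
    info = requests.get(f"{TES_BASE}/ga4gh/tes/v1/tasks/{task_id}", timeout=30)
    print(task_id, info.json()["state"])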
Key highlights
Improved memory efficiency across FL servers, clients, and communicators.
Enhanced examples for scaling federated learning across nodes and GPUs.
Bug fixes and documentation improvements.
Key highlights
Documentation and tutorials for CADRE (Customizable Assurance of Data Readiness) modules.
Expanded FLamby tutorials for both AWS and HPC deployments.
Improved guidance for Globus-based authentication (a login-flow sketch follows this list).
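As context for the Globus guidance, here is a minimal sketch of the standard globus_sdk native-app login flow that such authentication builds on. The client ID is a placeholder, not an APPFL credential; APPFL's own setup steps are covered in its documentation.

    # Minimal sketch of the Globus native-app login flow; CLIENT_ID is a
    # hypothetical registered application ID, not an APPFL credential.
    import globus_sdk

    CLIENT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

    client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    client.oauth2_start_flow(refresh_tokens=True)
    print(f"Log in at: {client.oauth2_get_authorize_url()}")

    auth_code = input("Paste the authorization code here: ").strip()
    tokens = client.oauth2_exchange_code_for_tokens(auth_code)
    print(list(tokens.by_resource_server))  # tokens grouped by Globus service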
Key highlights
Introduction of a data readiness agent to assess the AI-readiness of client data prior to federated training.
Support for running APPFL tutorials on AWS SageMaker.
CI integration with ALCF GitLab and GPU testing workflows.
Key highlights
Ray-based communicator backend with documentation (the underlying pattern is sketched after this list).
Optional MPI dependency to simplify installation.
Tutorials for international federations and shared Globus Compute endpoints.
Google Colab support for federated learning demonstrations.
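The sketch below illustrates the general pattern a Ray-based backend builds on: clients run as Ray actors, and the server gathers their local updates each communication round. It shows the idea only; the names and the toy "training" step are illustrative, not APPFL's actual Ray communicator API.

    # Pattern sketch: FL clients as Ray actors, aggregated with ray.get.
    import ray

    ray.init()

    @ray.remote
    class Client:
        def __init__(self, client_id: int):
            self.client_id = client_id

        def local_update(self, global_weight: float) -> float:
            # Stand-in for local training on private data.
            return global_weight + 0.1 * (self.client_id + 1)

    clients = [Client.remote(i) for i in range(3)]
    global_weight = 0.0
    for _ in range(2):  # two communication rounds
        updates = ray.get([c.local_update.remote(global_weight) for c in clients])
        global_weight = sum(updates) / len(updates)  # simple averaging
    print(global_weight)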
Key highlights
ProxyStore-based communication for pass-by-reference transfer of large objects (sketched after this list).
MONAI integration for medical imaging workflows.
Multi-GPU PyTorch DDP support.
Colab-based federated learning tutorials.
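To make the ProxyStore item concrete, the sketch below shows ProxyStore's pass-by-reference model in isolation: a large object goes into a store and a lightweight proxy travels in its place, resolving lazily on first use. The store name and file path are placeholders; how APPFL wires this into its communicators is described in its documentation.

    # Minimal ProxyStore sketch: pass a proxy instead of a large object.
    import numpy as np
    from proxystore.connectors.file import FileConnector
    from proxystore.store import Store

    store = Store("fl-demo", FileConnector("/tmp/proxystore-demo"))  # placeholder path
    model_update = np.zeros(1_000_000)  # stand-in for a large model update
    proxy = store.proxy(model_update)   # cheap to send between processes
    print(proxy.shape)                  # first access resolves the proxy
    store.close()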
Key highlights
Improved experiment metadata and client naming.
WandB integration for logging training metrics (a minimal sketch follows this list).
Documentation for running APPFL on ALCF Polaris.
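Here is a minimal sketch of the kind of WandB metric logging this enables; the project name and metric values are placeholders rather than APPFL's built-in hooks.

    # Minimal WandB logging sketch with placeholder project and metrics.
    import wandb

    run = wandb.init(project="appfl-demo", name="server-run")
    for round_idx in range(3):
        # In a real run these values come from each aggregation round.
        wandb.log({"round": round_idx, "val_accuracy": 0.70 + 0.05 * round_idx})
    run.finish()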
While no new major version tags were issued during 2024, this year marked a significant maturation phase for APPFL:
Publication of a comprehensive APPFL framework paper describing architectural design, extensibility, and privacy-preserving capabilities.
Expansion of tutorials, benchmarks, and deployment guidance that informed the subsequent 1.x release series.
Continued adoption across biomedical, power-grid, and cross-institutional federated learning use cases.
Framework evolution and refactoring
Major internal refactoring to improve modularity and extensibility.
Expansion of APIs for custom trainers, aggregators, and communication backends (an illustrative aggregator sketch appears after this section).
Improved documentation and examples for federated learning at scale.
This work laid the foundation for the APPFL 1.x release series.
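As a rough illustration of what such extension points enable, the sketch below defines a hypothetical aggregator interface and a simple mean implementation. The BaseAggregator name and aggregate() signature are invented for illustration; APPFL's actual classes and registration mechanism are documented at appfl.ai.

    # Hypothetical sketch of a pluggable aggregator extension point. The
    # BaseAggregator name and aggregate() signature are illustrative only.
    from abc import ABC, abstractmethod

    class BaseAggregator(ABC):
        @abstractmethod
        def aggregate(self, updates: list[dict]) -> dict:
            """Combine client updates into a new global model state."""

    class MeanAggregator(BaseAggregator):
        def aggregate(self, updates: list[dict]) -> dict:
            return {
                key: sum(u[key] for u in updates) / len(updates)
                for key in updates[0]
            }

    print(MeanAggregator().aggregate([{"w": 1.0}, {"w": 3.0}]))  # {'w': 2.0}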
Initial APPFL release
Open-source release of the APPFL framework.
Introduction of privacy-preserving, decentralized model training without centralized data aggregation (the core averaging idea is sketched below).
Early adoption in sensitive biomedical and scientific data environments.
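To ground that idea, here is a generic federated-averaging sketch: clients contribute locally trained model vectors, which are combined in proportion to their dataset sizes, so raw data never leaves a client. This is the textbook FedAvg idea, not the initial release's exact code.

    # Generic FedAvg sketch: combine local models without moving raw data.
    def fedavg(client_weights, client_sizes):
        """Average client model vectors, weighted by local dataset size."""
        total = sum(client_sizes)
        return [
            sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0]))
        ]

    # Two clients with different dataset sizes contribute local models.
    print(fedavg([[1.0, 2.0], [3.0, 4.0]], [10, 30]))  # -> [2.5, 3.5]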
APPFL Releases and Changelog:
https://github.com/APPFL/APPFL/releases
APPFL Documentation and Tutorials:
https://appfl.ai