APPFL Framework
Advanced Privacy-Preserving Federated Learning Framework
APPFL (Advanced Privacy-Preserving Federated Learning) is an open, extensible software framework designed to enable collaborative AI model development across distributed, siloed, and sensitive data environments—without requiring centralized data movement.
Within PALISADE-X, APPFL serves as a core enabling technology for scalable, trustworthy, and privacy-preserving AI across cloud, HPC, and secure computing infrastructures.
Why APPFL?
Modern AI applications increasingly rely on data that is:
Sensitive (clinical, genomic, operational, infrastructure)
Distributed across institutions
Subject to governance, privacy, and compliance constraints
APPFL addresses these challenges by allowing models—not data—to move, enabling collaborative learning while respecting data locality, policy, and trust boundaries.
Supports cross-silo and cross-institution federated learning
Designed for execution across:
HPC systems
Cloud platforms
Hybrid and multi-site deployments
Scales from small federations to large, multi-node GPU workflows
No centralized data aggregation
Configurable privacy mechanisms and secure communication layers (see the sketch after this list)
Compatible with trusted execution environments and confidential computing workflows
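As a rough illustration of the configurable privacy mechanisms mentioned above, the sketch below clips a client's model update and adds Gaussian noise before it leaves the client, a standard differential-privacy-style safeguard. It is a minimal sketch only; the function name and the default clipping norm and noise multiplier are illustrative choices, not APPFL's actual privacy API.

```python
# Illustrative sketch only; not the APPFL privacy API.
from typing import Dict
import torch


def privatize_update(update: Dict[str, torch.Tensor],
                     clip_norm: float = 1.0,
                     noise_multiplier: float = 0.5) -> Dict[str, torch.Tensor]:
    """Clip an update to a global L2 norm and add Gaussian noise before sharing it."""
    # Global L2 norm across all tensors in the update.
    total_norm = torch.sqrt(sum((p.float() ** 2).sum() for p in update.values())).item()
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    noisy = {}
    for name, p in update.items():
        clipped = p.float() * scale
        # Noise scale follows the usual Gaussian-mechanism convention
        # (noise multiplier times the clipping norm).
        noisy[name] = clipped + torch.randn_like(clipped) * noise_multiplier * clip_norm
    return noisy
```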
APPFL is built around clean abstractions that allow users to:
Plug in custom trainers, aggregators, and communication backends
Integrate domain-specific workflows without modifying core infrastructure
Extend the framework for new research and operational needs
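As a sketch of the first kind of pluggable component, the code below shows the general shape of a local trainer a user might supply: a few epochs of SGD on the client's own data, returning the updated model state. The class name and method signature here are illustrative, not APPFL's actual trainer interface; the real plug-in API is documented at https://appfl.ai.

```python
# Conceptual sketch of a user-supplied local trainer; names and signatures
# are illustrative, not APPFL's trainer interface.
from typing import Dict
import torch
from torch.utils.data import DataLoader


class LocalSGDTrainer:
    """Runs a few epochs of SGD on the client's local data and returns the new state."""

    def __init__(self, model: torch.nn.Module, loader: DataLoader,
                 lr: float = 0.01, epochs: int = 1):
        self.model = model
        self.loader = loader
        self.optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        self.loss_fn = torch.nn.CrossEntropyLoss()
        self.epochs = epochs

    def train(self, global_state: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
        # Start each round from the latest global model.
        self.model.load_state_dict(global_state)
        self.model.train()
        for _ in range(self.epochs):
            for inputs, targets in self.loader:
                self.optimizer.zero_grad()
                loss = self.loss_fn(self.model(inputs), targets)
                loss.backward()
                self.optimizer.step()
        # Return the locally trained state; the data itself never leaves the site.
        return {k: v.detach().cpu() for k, v in self.model.state_dict().items()}
```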
APPFL separates federated learning into composable components:
FL Server: Coordinates global training rounds and aggregation logic
FL Clients: Execute local training while data remains in place
Communication Layer: Supports multiple backends (MPI, Ray, ProxyStore, Globus Compute, cloud-native services)
Experiment & Metadata Tracking: Captures model state, metrics, and federation context for reproducibility
This design enables portability across computing environments while maintaining consistent execution semantics.
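To make the division of labor concrete, here is a minimal, in-process sketch of one federated round: the server broadcasts the global state, each client trains locally, and the returned states are combined by a sample-count-weighted average (FedAvg). All names here are illustrative; a real APPFL deployment uses its server and client agents and one of the communication backends listed above.

```python
# Minimal in-process sketch of one federated round; not APPFL's server code.
from typing import Callable, Dict, List, Tuple
import copy
import torch

# A "client" here is just a callable that takes the current global model state,
# trains on data that never leaves its site, and returns the updated state
# together with its local sample count.
ClientFn = Callable[[Dict[str, torch.Tensor]], Tuple[Dict[str, torch.Tensor], int]]


def run_round(global_state: Dict[str, torch.Tensor],
              clients: List[ClientFn]) -> Dict[str, torch.Tensor]:
    """One synchronous round: broadcast, local training, weighted aggregation."""
    updates, counts = [], []
    for client in clients:
        local_state, n_samples = client(copy.deepcopy(global_state))
        updates.append(local_state)
        counts.append(n_samples)
    # Sample-count-weighted average of the returned states (FedAvg).
    total = float(sum(counts))
    return {
        name: sum((n / total) * state[name].float() for n, state in zip(counts, updates))
        for name in global_state
    }
```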
APPFL has evolved rapidly to support emerging AI and infrastructure needs:
Multi-GPU training support using PyTorch DDP (see the sketch after this list)
Memory optimizations across server and client components
Improved support for large-scale federations and long-running workflows
Native support for AWS, Google Cloud, and HPC systems
Tutorials and reference deployments for ALCF systems
Ray-based communicator for elastic execution models
Support for federated tuning of large and foundation models
Integration of Fed-SB for efficient federated optimization
Use cases spanning biomedical AI, power-grid modeling, and scientific ML
Introduction of data readiness agents to assess AI-readiness prior to training
Integration with CADRE to support trustworthy and auditable ML workflows
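The first item above, multi-GPU local training with PyTorch DDP, can be pictured roughly as follows: a client-side trainer wraps its model so gradients are synchronized across the GPUs available at that site. This is a generic sketch assuming a torchrun-style launch; it is not APPFL's internal implementation.

```python
# Generic DDP setup sketch for a client's local training; assumes torchrun.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup_ddp_model(model: torch.nn.Module) -> DDP:
    """Initialize the process group and wrap the model for multi-GPU local training."""
    dist.init_process_group(backend="nccl")      # assumes a torchrun-style launch
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun on each process
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    # DDP synchronizes gradients across the client's GPUs on every backward pass,
    # so the local optimizer sees an averaged gradient.
    return DDP(model, device_ids=[local_rank])
```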
APPFL is designed for:
Researchers developing new federated algorithms
Infrastructure teams deploying federated AI as a service
Domain scientists applying federated learning without deep systems expertise
Key features include:
Python-native APIs
Extensive documentation and tutorials
Example workflows for real-world datasets
Compatibility with common ML tooling and experiment tracking systems
Within PALISADE-X, APPFL enables:
Secure collaboration across institutions and sectors
Federated AI pipelines spanning edge, cloud, and leadership-class computing
Reproducible, policy-aware AI workflows aligned with data governance requirements
APPFL provides the federated learning backbone that allows PALISADE-X to operate as a trusted, scalable, and future-proof AI ecosystem.
APPFL Documentation & Tutorials
https://appfl.ai
APPFL Releases & Changelog
https://github.com/APPFL/APPFL/releases