www.palisadex.net
Papers
Differentially Private Federated Learning via Inexact ADMM
Differential privacy (DP) techniques can be applied to the federated learning model to protect data privacy against inference attacks on the communication among the learning agents. The DP techniques, however, hinder achieving greater learning performance while ensuring strong data privacy. In this paper we develop a DP inexact alternating direction method of multipliers (ADMM) algorithm that solves a sequence of trust-region subproblems whose objectives are perturbed by random noises generated from a Laplace distribution. We show that our algorithm provides $\bar{\epsilon}$-DP for every iteration and an $\mathcal{O}(1/T)$ rate of convergence in expectation, where $T$ is the number of iterations. Using the MNIST and FEMNIST datasets for image classification, we demonstrate that our algorithm reduces the testing error by up to $22\%$ compared with the existing DP algorithm while achieving the same level of data privacy. The numerical experiments also show that our algorithm converges faster than the existing algorithm.
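The core idea above is objective perturbation: each subproblem's objective is perturbed by Laplace noise before it is solved, which is what yields $\bar{\epsilon}$-DP per iteration. The minimal Python sketch below illustrates that idea under stated assumptions only: a quadratic local loss and a simple proximal step stand in for the paper's trust-region subproblem, and dp_local_update, sensitivity, and rho are illustrative names, not the paper's notation.

```python
import numpy as np

def dp_local_update(w_global, X, y, rho, epsilon_bar, sensitivity, rng):
    """One hypothetical client update with objective perturbation.

    A proximal step on a quadratic local loss stands in for the paper's
    trust-region subproblem; sensitivity is an assumed bound on how much
    the loss gradient can change across neighboring datasets.
    """
    # Laplace noise calibrated to the per-iteration privacy budget epsilon_bar.
    noise = rng.laplace(scale=sensitivity / epsilon_bar, size=w_global.shape)
    # Gradient of the local least-squares loss ||X w - y||^2 / (2 n) at w_global.
    grad = X.T @ (X @ w_global - y) / len(y)
    # Minimizer of the perturbed linearized objective plus a proximal term:
    #   (grad + noise)^T (w - w_global) + (rho / 2) ||w - w_global||^2.
    return w_global - (grad + noise) / rho

# Example: one client with synthetic data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
w_new = dp_local_update(np.zeros(5), X, y, rho=1.0,
                        epsilon_bar=0.1, sensitivity=1.0, rng=rng)
```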
APPFL: Open-Source Software Framework for Privacy-Preserving Federated Learning
Federated learning (FL) enables training models at different sites and updating only the weights learned from that training, instead of transferring data to a central location and training there as in classical machine learning. This capability is especially important in domains such as biomedicine and the smart grid, where data may not be shared freely or stored at a central location because of policy constraints. Thanks to this ability to learn from decentralized datasets, FL is now a rapidly growing research field, and numerous FL frameworks have been developed. In this work, we introduce APPFL, the Argonne Privacy-Preserving Federated Learning framework. APPFL allows users to leverage implemented privacy-preserving algorithms, implement new algorithms, and simulate and deploy various FL algorithms with privacy-preserving techniques. The modular framework enables users to customize the components for algorithms, privacy, communication protocols, neural network models, and user data. We also present a new communication-efficient algorithm based on an inexact alternating direction method of multipliers; it requires significantly less communication between the server and the clients than the current state of the art does. We demonstrate the computational capabilities of APPFL, including differentially private FL on various test datasets and its scalability, using multiple algorithms and datasets on different computing environments.
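APPFL's own documentation is the authoritative reference for its API; the sketch below deliberately uses no APPFL calls. It is only a plain-NumPy schematic of how the modular pieces named in the abstract (an aggregation algorithm and a privacy component acting on client updates) can compose in one server round; fedavg_round, clip, and noise_scale are hypothetical names introduced here for illustration.

```python
import numpy as np

def fedavg_round(global_w, client_updates, lr=0.1, clip=1.0, noise_scale=0.0, rng=None):
    """One hypothetical server round: privatize each client's update, then average.

    Not APPFL's API; a schematic separating the privacy component
    (clip + Laplace noise) from the aggregation algorithm (averaging).
    """
    rng = rng or np.random.default_rng()
    privatized = []
    for u in client_updates:
        # Privacy component: bound each update's norm, then add Laplace noise.
        u = u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))
        if noise_scale > 0.0:
            u = u + rng.laplace(scale=noise_scale, size=u.shape)
        privatized.append(u)
    # Aggregation algorithm: step the global model along the average update.
    return global_w - lr * np.mean(privatized, axis=0)

# Example: three clients, a 4-dimensional model.
rng = np.random.default_rng(0)
w = np.zeros(4)
updates = [rng.normal(size=4) for _ in range(3)]
w = fedavg_round(w, updates, noise_scale=0.1, rng=rng)
```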
A Privacy-Preserving Distributed Control of Optimal Power Flow
We consider a distributed optimal power flow problem formulated as an optimization problem that maximizes a nondifferentiable concave function. Solving such a problem with existing distributed algorithms can raise data privacy issues because an adversary can use the solution information exchanged within the algorithms to infer the underlying data. To preserve data privacy, in this paper we propose a differentially private projected subgradient (DP-PS) algorithm that includes a solution encryption step. We show that the sequence generated by DP-PS converges in expectation, in probability, and with probability 1. Moreover, we show that the rate of convergence in expectation depends on the target privacy level of DP-PS chosen by the user. We conduct numerical experiments that demonstrate the convergence and data privacy preservation of DP-PS.
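As a rough illustration of a DP projected-subgradient step of the kind the abstract describes, here is a minimal NumPy sketch assuming a box feasible set and Laplace noise calibrated by an assumed sensitivity constant; dp_ps_step and its parameters are hypothetical names, and the noise addition stands in for the paper's solution encryption step.

```python
import numpy as np

def dp_ps_step(x, subgrad, step, lo, hi, epsilon, sensitivity, rng):
    """One hypothetical DP projected-subgradient ascent step on a
    box-constrained concave maximization problem."""
    # Ascent along a subgradient of the concave objective.
    x = x + step * subgrad
    # Perturb the iterate before it is shared (stands in for the paper's
    # solution encryption step); Laplace scale is calibrated to epsilon.
    x = x + rng.laplace(scale=sensitivity / epsilon, size=x.shape)
    # Project back onto the feasible box [lo, hi].
    return np.clip(x, lo, hi)

# Example: maximize -|x - 3| (concave) over [0, 5] with a fixed step size.
rng = np.random.default_rng(0)
x = np.array([0.0])
for _ in range(50):
    subgrad = -np.sign(x - 3.0)        # a subgradient of -|x - 3|
    x = dp_ps_step(x, subgrad, step=0.1, lo=0.0, hi=5.0,
                   epsilon=1.0, sensitivity=0.05, rng=rng)
```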