Name: Linh Tran
Advisors: Ana Milanova and Stacy Patterson
Private and Efficient Federated Learning: Empowering Data-Sensitive Sectors
Abstract:
Federated learning is a decentralized machine learning approach that enables multiple clients to collaboratively train a global model by sharing only model updates (e.g., gradients or model parameters). Despite not transmitting raw data, federated learning remains vulnerable to privacy threats such as inference attacks, in which adversaries reconstruct sensitive information from the shared model updates. To mitigate these risks, differential privacy adds carefully calibrated noise to the shared components. However, differential privacy often comes at a cost in model accuracy, and balancing privacy and accuracy in a federated learning setting remains a persistent challenge. This difficulty is further compounded by the diversity of threat models: because different adversary models necessitate distinct differential privacy mechanisms, it is often unclear how to formally prove that an algorithm satisfies a desired privacy guarantee. Furthermore, heterogeneous data distributions and varying model architectures can exacerbate the privacy-accuracy trade-off. Consequently, there is a critical need for private federated learning frameworks that can adapt to these diverse architectural and data needs without sacrificing accuracy. To bridge these gaps, we introduce four novel privacy-preserving federated learning algorithms designed to balance the trade-off between privacy and accuracy effectively. By addressing these challenges, we demonstrate how federated learning can be made truly effective, private, and efficient for the most data-sensitive sectors.
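To make the noise-addition step concrete, the following is a minimal sketch of how a client might privatize a model update with the Gaussian mechanism before sharing it: the update's L2 norm is clipped to bound sensitivity, then Gaussian noise calibrated to that bound is added. This is an illustrative example only, not one of the four algorithms in the thesis; the function name, `clip_norm`, and `noise_multiplier` values are hypothetical.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip an update's L2 norm and add calibrated Gaussian noise.

    Illustrative sketch of the Gaussian mechanism; parameter values are
    hypothetical and would be chosen to meet a target privacy budget.
    """
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    # Scale the update so its L2 norm is at most clip_norm,
    # which bounds the sensitivity of the aggregation step.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise standard deviation is proportional to the sensitivity bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

Only the noisy, clipped update leaves the client; the server aggregates these privatized updates into the global model.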