Toward privacy-preserving, secure, and fair federated learning

Author(s): Liu, Zheyuan
Date Accessioned: 2024-10-29T16:44:19Z
Date Available: 2024-10-29T16:44:19Z
Publication Date: 2024
SWORD Update: 2024-10-13T19:04:46Z
Abstract: Federated learning is a collaborative machine learning approach that enables multiple clients to train a shared model while keeping their local data private. This method addresses privacy concerns by ensuring that sensitive data remains decentralized, thus reducing the risk of data breaches. By leveraging the collective knowledge of diverse datasets, federated learning enhances model performance and generalization, making it particularly valuable in scenarios where data privacy and security are paramount. This dissertation aims to enhance federated learning by addressing three key challenges: privacy preservation, security against malicious attacks, and fairness across diverse demographic groups.

First, we propose a novel privacy-preserving federated learning mechanism. While local datasets remain private, intermediate model parameters can still leak sensitive information. Existing solutions either add noise, which reduces model accuracy, or use inefficient cryptographic techniques. Our method employs two non-colluding servers and efficient cryptographic primitives for secure aggregation, maintaining privacy without sacrificing accuracy or efficiency.

Second, we present a secure federated learning method to defend against malicious clients. Federated learning is vulnerable to Byzantine attacks, where malicious clients corrupt their local data or updates to degrade the global model. Current methods either ineffectively filter malicious updates or rely on a potentially biased trusted dataset. Our method evaluates client trust based on similarity to a trusted dataset and incrementally builds a trusted client set. This approach effectively defends against Byzantine attacks, achieving high model accuracy even with an initially biased trusted dataset.

Finally, we introduce a fair federated learning method to ensure equitable accuracy across demographic groups. Machine learning models often favor majority groups due to data imbalances or biased training. Existing fairness methods are typically centralized and require access to the entire dataset, making them unsuitable for federated learning. Our approach guarantees fairness by exchanging minimal additional data among clients, preserving the global model's utility while ensuring equitable accuracy across all groups.
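The two-server secure-aggregation idea described in the abstract can be illustrated with additive secret sharing: each client splits its (quantized) model update into two random shares, one per server, so neither non-colluding server sees any individual update, yet the sum of all updates is recoverable. This is a minimal sketch of the general technique only; the dissertation's actual cryptographic primitives may differ, and all names here are illustrative.

```python
import secrets

PRIME = 2**61 - 1  # field modulus for additive secret sharing

def share(value):
    """Split an integer update into two additive shares, one per server."""
    r = secrets.randbelow(PRIME)
    return r, (value - r) % PRIME

def reconstruct(s1, s2):
    """Combine the two shares (or share aggregates) back into a value."""
    return (s1 + s2) % PRIME

# Each client splits its update between the two non-colluding servers.
client_updates = [5, 7, 11]
server_a, server_b = [], []
for u in client_updates:
    a, b = share(u)
    server_a.append(a)
    server_b.append(b)

# Each server sums only the shares it holds; individual updates stay hidden.
agg_a = sum(server_a) % PRIME
agg_b = sum(server_b) % PRIME

# Combining the two per-server aggregates yields the sum of all updates.
total = reconstruct(agg_a, agg_b)
assert total == sum(client_updates)
```

Because each share is uniformly random on its own, a single server learns nothing about any client's update; privacy fails only if the two servers collude, which matches the non-collusion assumption stated in the abstract.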
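The trust-based Byzantine defense can likewise be sketched with a similarity score: each client update is compared against a reference update computed on the trusted dataset, and only sufficiently similar clients enter the trusted set and contribute to the aggregate. This is a hypothetical illustration of the general idea (cosine similarity, a fixed threshold, and all names are assumptions), not the dissertation's exact algorithm.

```python
import math

def cosine(u, v):
    """Cosine similarity between two update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def trusted_aggregate(client_updates, reference_update, threshold=0.0):
    """Weight each client update by its similarity to the reference update
    computed on the trusted dataset; clients at or below the threshold are
    excluded from the trusted set."""
    trusted = []
    for u in client_updates:
        score = cosine(u, reference_update)
        if score > threshold:
            trusted.append((score, u))
    if not trusted:
        return reference_update
    total = sum(s for s, _ in trusted)
    dim = len(reference_update)
    return [sum(s * u[i] for s, u in trusted) / total for i in range(dim)]

# Honest clients roughly agree with the trusted reference; the malicious
# client pushes in the opposite direction and is filtered out.
reference = [1.0, 1.0]
updates = [[0.9, 1.1], [1.2, 0.8], [-5.0, -5.0]]  # last update is malicious
agg = trusted_aggregate(updates, reference)
```

Re-running this scoring each round lets the trusted client set grow incrementally, which is how a method of this kind can tolerate an initially small or biased trusted dataset.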
Advisor: Zhang, Rui
Degree: Ph.D.
Department: University of Delaware, Department of Computer and Information Sciences
DOI: https://doi.org/10.58088/xp01-3b72
Unique Identifier: 1499602197
URL: https://udspace.udel.edu/handle/19716/35481
Language: en
Publisher: University of Delaware
URI: https://www.proquest.com/pqdtlocal1006271/dissertations-theses/toward-privacy-preserving-secure-fair-federated/docview/3116144098/sem-2?accountid=10457
Keywords: Byzantine attacks; Fairness; Federated learning; Privacy; Robustness; Security
Title: Toward privacy-preserving, secure, and fair federated learning
Type: Thesis
Files
Original bundle:
Name: Liu_udel_0060D_16232.pdf
Size: 1.89 MB
Format: Adobe Portable Document Format
License bundle:
Name: license.txt
Size: 2.22 KB
Format: Item-specific license agreed upon at submission