Project: #35
Secure Aggregation Protocols for Federated Learning: Ensuring Privacy and Integrity in Decentralized Model Updates
Kmsazid
Federated learning, a promising approach for training machine learning (ML) models on decentralized devices while preserving data privacy, faces challenges in securely aggregating model updates. This proposal aims to develop novel cryptographic techniques for secure aggregation that ensure both privacy and integrity.
Federated learning relies on aggregating model updates from numerous decentralized devices, each holding sensitive data. Existing aggregation methods risk exposing individual updates, leading to privacy breaches, and malicious participants may attempt to manipulate the aggregation process, compromising model integrity. Secure aggregation protocols are therefore needed to protect data privacy and preserve model integrity.

This research aims to develop cryptographic protocols for secure model-update aggregation in federated learning, ensuring both privacy and integrity. It involves designing mechanisms that protect the privacy of individual updates, verifying the integrity of the aggregated model, and evaluating protocol performance in real-world scenarios. To achieve this, we will first investigate cryptographic techniques such as secure multi-party computation (SMPC), homomorphic encryption, and differential privacy for their suitability in federated learning. We will then develop tailored cryptographic protocols, weighing factors such as communication overhead, computational complexity, and privacy guarantees. These protocols will be implemented using compatible frameworks, ensuring seamless integration with existing federated learning systems.

Expected outcomes include novel secure aggregation protocols, empirical evidence of their effectiveness, and insights into the trade-offs involved in secure federated learning. The significance of this project lies in its potential to enable privacy-preserving, trustworthy federated learning systems, enhancing data privacy and promoting collaboration while safeguarding sensitive information.
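To make the SMPC direction concrete, the following is a minimal sketch of pairwise additive masking, the core idea behind SMPC-style secure aggregation: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the server's sum and the server never sees any individual update in the clear. All function names, the modulus, and the quantized example updates are illustrative assumptions, not part of this proposal's protocol design (which omits dropout handling and authenticated mask exchange).

```python
import random

MOD = 2**32  # illustrative modulus; all arithmetic is done mod MOD

def pairwise_masks(num_clients, dim, seed=0):
    """Derive a cancelling mask vector for every client.

    For each pair (i, j), client i adds a shared random vector and
    client j subtracts it, so all masks cancel in the global sum.
    """
    rng = random.Random(seed)
    masks = [[0] * dim for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            shared = [rng.randrange(MOD) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] = (masks[i][k] + shared[k]) % MOD
                masks[j][k] = (masks[j][k] - shared[k]) % MOD
    return masks

def mask_update(update, mask):
    """Client side: hide a quantized, non-negative update vector."""
    return [(u + m) % MOD for u, m in zip(update, mask)]

def aggregate(masked_updates):
    """Server side: sum masked updates; the pairwise masks cancel."""
    dim = len(masked_updates[0])
    total = [0] * dim
    for mu in masked_updates:
        for k in range(dim):
            total[k] = (total[k] + mu[k]) % MOD
    return total

# Example: three clients, each holding a small quantized update.
updates = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
masks = pairwise_masks(num_clients=3, dim=3)
masked = [mask_update(u, m) for u, m in zip(updates, masks)]
print(aggregate(masked))  # masks cancel: [12, 15, 18]
```

Note that each individual masked vector is statistically independent of the underlying update, which is exactly the privacy property the proposed protocols must preserve while also handling client dropouts and verifying integrity.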
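The differential-privacy direction can likewise be sketched as the standard clip-and-noise step applied to the aggregate: each update is clipped to a bounded L2 norm, and Gaussian noise calibrated to that bound is added to the sum. The clipping norm, noise multiplier, and function names below are assumed example values for illustration, not parameters chosen by this proposal.

```python
import math
import random

def clip_update(update, clip_norm):
    """Scale an update so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Sum clipped updates, then add Gaussian noise scaled to clip_norm.

    Bounding each client's contribution via clipping is what lets the
    added noise yield a differential-privacy guarantee for the sum.
    """
    rng = random.Random(seed)  # seeded only to make the sketch reproducible
    dim = len(updates[0])
    clipped = [clip_update(u, clip_norm) for u in updates]
    total = [sum(c[k] for c in clipped) for k in range(dim)]
    sigma = clip_norm * noise_multiplier
    return [t + rng.gauss(0.0, sigma) for t in total]

# Example: two clients; the second update is clipped before summing.
noisy_sum = dp_aggregate([[0.3, 0.4], [3.0, 4.0]], clip_norm=1.0)
```

In the proposed protocols, this noising step would be combined with the cryptographic aggregation mechanisms so that neither individual updates nor the exact noiseless sum are revealed.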