Project: #102
Developing robust defence mechanisms against attacks in federated learning
Federated Learning (FL) is an innovative approach to distributed machine learning (ML), where data remains decentralized, allowing multiple clients to collaboratively train a shared ML model without the need to exchange raw data [1], [2]. This approach preserves data privacy and reduces communication overhead, making it well-suited for real-world applications such as healthcare, finance, and mobile services.
Despite these advantages, FL is vulnerable to a range of security threats, including data and model poisoning, inference attacks, Byzantine attacks, backdoor attacks, Sybil attacks, and other adversarial manipulations that can compromise both the model's integrity and clients’ data privacy [3], [4]. As FL becomes increasingly important in sensitive applications, it is crucial to develop robust defence mechanisms to protect FL systems from both internal and external threats.
This project aims to design, implement, and evaluate robust defence strategies—such as aggregation techniques, client selection mechanisms, optimizer adjustments, and differential privacy methods—that can detect and mitigate attacks in FL environments. This will ensure the secure and reliable operation of FL systems [4], [5].
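To make the aggregation-based defences concrete, the minimal sketch below illustrates one robust aggregation technique of the kind studied in [4]: coordinate-wise median aggregation, which limits how far a minority of corrupted client updates can pull the global model. The function name and the NumPy representation of flattened updates are illustrative assumptions, not part of any specific FL framework.

```python
import numpy as np

def aggregate_median(client_updates):
    """Coordinate-wise median of flattened client model updates.

    Each parameter is aggregated by its median rather than its mean,
    so a minority of arbitrarily corrupted updates cannot drag the
    aggregate far from the honest majority.
    """
    stacked = np.stack(client_updates)  # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)   # shape: (num_params,)

# Three honest clients plus one poisoned client: the mean would be
# dominated by the outlier, but the median stays near [1, 1, 1].
honest = [np.array([0.9, 1.1, 1.0]),
          np.array([1.0, 0.9, 1.1]),
          np.array([1.1, 1.0, 0.9])]
poisoned = [np.array([100.0, -100.0, 100.0])]
print(aggregate_median(honest + poisoned))
```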
The key objectives of the project are as follows.
● Attack detection: Develop advanced methods to identify and classify various attack types during the FL process, enabling the design of specific defence mechanisms (a minimal detection sketch follows this list).
● Attack mitigation: Create strategies to reduce the impact of detected attacks, including retraining compromised models, using client selection techniques, and improving global aggregation methods.
● Model robustness: Strengthen the model through advanced regularization and optimizer modifications to minimize the effect of adversarial updates.
● Evaluation and validation: Conduct extensive experiments to evaluate the effectiveness of the proposed defence mechanisms and compare them with state-of-the-art methods.
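As a simple illustration of the detection and mitigation objectives above, the sketch below pairs update-norm clipping (which bounds any single client's influence on the global model and is also the standard first step of differentially private FL) with a z-score outlier test over update norms. The function names, threshold values, and NumPy representation of updates are illustrative assumptions; practical detectors use richer statistics than update norms alone.

```python
import numpy as np

def clip_update(update, max_norm=1.0):
    """Scale a client update so its L2 norm is at most max_norm,
    bounding the influence any single client has on the global model."""
    norm = np.linalg.norm(update)
    return update * (max_norm / norm) if norm > max_norm else update

def flag_outliers(client_updates, z_threshold=3.0):
    """Return indices of clients whose update norm is a z-score
    outlier relative to the rest of the cohort."""
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    z = (norms - norms.mean()) / (norms.std() + 1e-12)
    return [i for i, score in enumerate(z) if abs(score) > z_threshold]

# Ten benign clients plus one client sending an oversized update.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.1, size=5) for _ in range(10)]
updates.append(np.full(5, 50.0))   # poisoned update
print(flag_outliers(updates))      # typically flags index 10
clipped = [clip_update(u) for u in updates]
```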
References:
[1] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017). “Communication-efficient learning of deep networks from decentralized data,” International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, Florida, USA.
[2] L. Lyu, H. Yu, X. Ma, C. Chen, L. Sun, J. Zhao, Q. Yang, and P. S. Yu (2024). “Privacy and robustness in federated learning: Attacks and defenses,” IEEE Transactions on Neural Networks and Learning Systems, vol. 35, no. 7, pp. 8726-8746.
[3] Y. Sun, H. Ochiai, and J. Sakuma (2024). “Attacking-distance-aware attack: Semi-targeted model poisoning on federated learning,” IEEE Transactions on Artificial Intelligence, vol. 5, no. 2, pp. 925-939.
[4] K. Pillutla, S. M. Kakade, and Z. Harchaoui (2022). “Robust aggregation for federated learning,” IEEE Transactions on Signal Processing, vol. 70, pp. 1142-1154.
[5] H. Zeng, J. Li, J. Lou, S. Yuan, C. Wu, W. Zhao, S. Wu, and Z. Wang (2024). “BSR-FL: An efficient Byzantine-robust privacy-preserving federated learning framework,” IEEE Transactions on Computers, vol. 73, no. 8, pp. 2096-2110.