Abstract: Federated Learning (FL) is widely considered a suitable mechanism for privacy-preserving, distributed machine learning: training data remains decentralized on client devices, while only the shared global parameter updates are exchanged. Despite these advantages, FL relaxes several security provisions and thereby opens a gateway for adversarial AI: an adversary can manipulate local model updates or poison decentralized training data, hijacking the system. Challenges that arise include model poisoning, backdoor attacks...
Keywords: Federated Learning, Adversarial AI, Model Poisoning, Backdoor Attack, Gradient Inversion, Robust Aggregation, Anomaly Detection, Differential Privacy, Explainable AI (XAI), Trustworthy AI.
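To make the model-poisoning threat and the robust-aggregation defense named above concrete, the following minimal sketch may help; it is illustrative only, and the client count, update dimension, and attack scaling factor are arbitrary assumptions rather than values drawn from any cited work. It shows how a single scaled malicious update hijacks a plain FedAvg-style mean, while a coordinate-wise median, one standard robust-aggregation rule, stays near the honest consensus:

import numpy as np

rng = np.random.default_rng(0)

# Nine honest clients report similar local updates; one attacker submits
# a scaled update to drag the global model in an arbitrary direction.
# (Client count, dimension, and scaling are illustrative assumptions.)
honest = [rng.normal(0.5, 0.05, size=4) for _ in range(9)]
poisoned = -10.0 * np.ones(4)  # model-poisoning update
updates = np.stack(honest + [poisoned])

# A plain mean (FedAvg-style) is pulled toward the malicious update,
# while the coordinate-wise median stays close to the honest consensus.
print("mean  :", np.round(updates.mean(axis=0), 3))
print("median:", np.round(np.median(updates, axis=0), 3))

Here the mean lands near -0.55 per coordinate, far from the honest value of roughly 0.5, whereas the median remains near 0.5, which is why median-based and trimmed aggregators are a common first line of defense against a small fraction of malicious clients.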