Federated Learning (FL) enables decentralized training of machine learning models on data that remains on clients' devices, strengthening privacy and data security.
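To make the idea concrete, the following is a minimal sketch of federated averaging (FedAvg-style aggregation) on a toy least-squares task; the function names and the setup are illustrative, not from the source. Each client computes a local update on its private data, and the server only ever sees weight vectors, never the data itself.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One gradient-descent step on a client's private least-squares data."""
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_round(global_weights, clients):
    """Clients train locally; the server averages updates weighted by client size."""
    updates = [local_update(global_weights.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Simulated clients with private data (toy linear-regression task).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
```

After enough rounds, the global model converges toward the underlying parameters even though no client ever shares raw data.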
Significant challenges arise in FL, particularly in balancing privacy, transparency, and model explainability within AI systems.
These challenges are crucial because they address the core principles of privacy and transparency, which are essential for user trust and regulatory compliance in AI systems.
A second set of challenges concerns security and robustness in FL, specifically the vulnerability of FL models to backdoor attacks.
Existing defenses, such as statistical-based, filter-based, and differential-privacy approaches, often fall short against sophisticated backdoor attacks, especially under continuous attacks in which malicious updates are injected in every training round.
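As one illustration of the statistical-defense family mentioned above, coordinate-wise median aggregation bounds the influence any single client can exert on the global update. This is a hedged sketch with made-up update values, not an implementation from the source, and it also shows why such defenses can struggle: a persistent attacker contributing every round still shifts the median slightly over time.

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median: robust to a minority of outlier (malicious) updates."""
    return np.median(np.stack(updates), axis=0)

# Four honest clients send similar updates; one attacker sends a poisoned one.
honest = [np.array([1.0, 1.0]) + 0.01 * i for i in range(4)]
malicious = [np.array([100.0, -100.0])]  # hypothetical backdoor update

agg = median_aggregate(honest + malicious)
```

Here the aggregated update stays close to the honest cluster, whereas a plain mean would be dragged far off by the poisoned vector.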
Traditional FL approaches face obstacles such as communication bottlenecks, staleness of updates, and non-IID (non-independent and identically distributed) data among IoT devices.
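To illustrate what non-IID data means in this setting, the sketch below builds a label-skewed partition in which each simulated device holds samples from only a few classes; the function name and parameters are hypothetical, chosen for the example.

```python
import numpy as np

def label_skewed_partition(labels, n_clients, classes_per_client=2, seed=0):
    """Assign each client samples from only a few classes (label-skew non-IID)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    shards = {c: np.flatnonzero(labels == c) for c in classes}
    parts = []
    for _ in range(n_clients):
        own = rng.choice(classes, size=classes_per_client, replace=False)
        parts.append(np.concatenate([shards[c] for c in own]))
    return parts

labels = np.repeat(np.arange(5), 20)   # 5 classes, 20 samples each
parts = label_skewed_partition(labels, n_clients=4)
```

Each client's local distribution now covers only 2 of the 5 classes, so local gradients pull the global model in conflicting directions, which is exactly the non-IID obstacle described above.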
A further challenge is implementing secure and efficient machine learning (ML) models within resource-constrained IoT networks. Given the rise of cyberattacks targeting IoT devices and the difficulty of inspecting encrypted network traffic, existing centralized ML solutions have proven inadequate for IoT environments: they struggle to analyze and detect malicious behavior in real time, especially in distributed and encrypted IoT settings.