Federated Learning (FL) is a decentralized approach to machine learning in which models are collaboratively trained on data residing across multiple devices, such as IoT systems or edge devices. Instead of transferring raw data to a central server, each client trains the model locally on its own data and shares only model updates (such as gradients or weights) with the server.
A federated training round generally follows four steps: (1) the server initializes a global model and distributes it to the participating clients; (2) each client trains the model locally on its own data; (3) each client sends its model update back to the server; (4) the server aggregates the updates (for example, by weighted averaging) into a new global model, and the process repeats until convergence.
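The four steps above can be sketched with a minimal federated averaging (FedAvg) loop. This is an illustrative toy, not any of the frameworks discussed in this report: it assumes a simple linear-regression model trained with NumPy, and the helper names (`local_update`, `fed_avg`) are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Step 2 (client): a few epochs of gradient descent on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Step 4 (server): aggregate updates, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three clients holding disjoint local datasets drawn from the same task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)           # Step 1: server initializes the global model
for _ in range(20):              # repeated rounds until convergence
    # Steps 2-3: clients train locally and send back only the updated weights.
    updates = [local_update(global_w, X, y) for X, y in clients]
    # Step 4: the server averages the updates into the new global model.
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```

Note that the raw `(X, y)` pairs never leave the client loop; the server only ever sees weight vectors, which is the privacy property FL is built around.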
However, FL faces critical challenges in terms of privacy, explainability, and model accuracy across heterogeneous devices, especially in IoT applications where devices may differ significantly in computational power and data characteristics. A major issue is the trade-off between data privacy and data access: FL protects raw data by keeping it decentralized, but this restricted access can hamper model accuracy. Another significant concern is explainability: conventional FL models, especially those using complex neural networks, lack transparency, which limits their adoption in domains requiring clear model decision paths~\cite{information_infusion_FL}. Additionally, asynchronous and adaptive FL frameworks face unique obstacles, such as communication bottlenecks and data staleness in IoT environments. These issues, alongside the need to detect malicious model updates (e.g., backdoor attacks), highlight the complexity of deploying FL effectively across distributed networks~\cite{anodyne_mitigating_backdoor_attacks}.
Different strategies and frameworks have been proposed that provide insights into FL's evolving landscape and offer solutions to tackle these challenges.
In this report, Section 2 will examine related work in FL and related distributed learning approaches. Section 3 delves into specific FL challenges and describes solution approaches from the selected studies. Section 4 provides a comparative analysis of these solutions, highlighting commonalities and divergences. Subsequent sections will address the legal and ethical implications of FL deployment, while the final section will outline conclusions and potential directions for future research.
The paper "Advancing Federated Learning: Optimizing Model Accuracy through Privacy-Conscious Data Sharing" proposes a privacy-preserving data-sharing mechanism that improves model accuracy while balancing privacy with the need for data access. Another paper, "An Adaptive Asynchronous Federated Learning Framework for Heterogeneous IoT," introduces an adaptive FL approach that addresses data staleness and communication constraints across varied IoT devices. The study "ANODYNE: Mitigating Backdoor Attacks in Federated Learning" presents a defense mechanism that strengthens model robustness against continuous backdoor attacks, with a focus on IoT security~\cite{anodyne_mitigating_backdoor_attacks}. Finally, "Increasing Trust in AI through Privacy Preservation and Model Explainability" discusses integrating Fuzzy Regression Trees into FL to obtain inherently interpretable models, supporting both privacy and transparency~\cite{information_infusion_FL}.