FedSIR: Spectral Client Identification and Relabeling for Federated Learning with Noisy Labels
⚡ High Priority
Federated learning's collaborative model training is hindered by noisy labels across distributed clients, which significantly degrade performance. To address this, the authors propose FedSIR, a multi-stage framework for robust federated learning under noisy labels. Unlike existing approaches, FedSIR does not rely solely on noise-tolerant loss functions or exploit specific data characteristics; instead, it performs spectral client identification and relabeling to improve model training. This enables more accurate identification of noisy clients and correction of their labels, improving overall model performance.
Why This Matters
Noisy labels pose a significant challenge in federated learning: they can produce biased models with poor predictive accuracy. Frameworks like FedSIR matter to practitioners because they can improve the reliability and accuracy of federated learning models in real-world applications.
Abstract: Federated learning (FL) enables collaborative model training without sharing raw data; however, the presence of noisy labels across distributed clients can severely degrade model performance.
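The paper's exact algorithm is not detailed in this digest, but the core idea of spectral client identification can be illustrated with a minimal sketch: represent each client by a vector of training statistics (e.g., per-round loss summaries), build a similarity graph over clients, and use the Fiedler vector of the graph Laplacian to split clients into two groups, flagging the higher-loss group as likely noisy. The function name, the RBF similarity choice, and the use of the first statistic as a loss proxy are all illustrative assumptions, not FedSIR's actual procedure.

```python
import numpy as np

def spectral_identify_noisy_clients(client_stats):
    """Illustrative sketch (not the paper's algorithm): spectrally split
    clients into two groups and flag the higher-loss group as noisy.

    client_stats: (n_clients, d) array; column 0 is assumed to be a
    per-client mean training loss (an assumption for this sketch).
    """
    X = np.asarray(client_stats, dtype=float)
    # RBF similarity between client statistic vectors
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(d2[d2 > 0]) + 1e-12
    W = np.exp(-d2 / sigma2)
    np.fill_diagonal(W, 0.0)
    # Unnormalized graph Laplacian L = D - W
    L = np.diag(W.sum(axis=1)) - W
    # Fiedler vector: eigenvector for the second-smallest eigenvalue
    # (np.linalg.eigh returns eigenvalues in ascending order)
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    group = fiedler >= 0
    # Resolve the sign ambiguity: the group with the larger mean loss
    # (column 0) is flagged as the noisy one
    if X[group, 0].mean() > X[~group, 0].mean():
        return group
    return ~group
```

A later relabeling stage could then restrict label correction to the flagged clients; that stage is not sketched here.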
References
- [Author]. (2026, April 22). FedSIR: Spectral Client Identification and Relabeling for Federated Learning with Noisy Labels. *arXiv*. https://arxiv.org/abs/2604.20825v1
Original Source: arXiv AI