Collaborative model training in federated learning is hindered by noisy labels across distributed clients, which significantly degrade performance. To address this, researchers propose FedSIR, a multi-stage framework for robust federated learning under noisy labels. Unlike existing approaches, FedSIR does not rely solely on noise-tolerant loss functions or on specific data characteristics; instead, it performs spectral client identification and relabeling: clients whose data are likely noisy are identified first, and their labels are then corrected, leading to more accurate training and better overall model performance. Because noisy labels can yield biased models with poor predictive accuracy, frameworks like FedSIR matter to practitioners who need reliable federated learning in real-world deployments.
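The source does not spell out how the spectral identification step works, but the general idea of a spectral client-identification stage can be sketched as follows. This is a hypothetical illustration, not FedSIR's actual algorithm: it assumes the server can observe per-round training-loss statistics for each client, builds a similarity graph over those loss trajectories, and uses the Fiedler vector of the graph Laplacian to cut the clients into two groups, flagging the higher-loss group as noisy. The function name `identify_noisy_clients` and all parameters are invented for this sketch.

```python
import numpy as np

def identify_noisy_clients(client_losses):
    """Split clients into clean/noisy groups via a spectral cut.

    client_losses: array of shape (n_clients, n_rounds) holding each
    client's per-round average training loss. Hypothetical sketch --
    the real FedSIR identification step is not specified in the source.
    """
    X = np.asarray(client_losses, dtype=float)
    # Pairwise squared distances between clients' loss trajectories.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # RBF similarity matrix (bandwidth set to the mean distance).
    S = np.exp(-d2 / (d2.mean() + 1e-12))
    # Unnormalized graph Laplacian; its second eigenvector (the
    # Fiedler vector) yields a 2-way partition of the client graph.
    L = np.diag(S.sum(axis=1)) - S
    _, vecs = np.linalg.eigh(L)
    group = vecs[:, 1] > 0
    # The partition with the higher mean loss is treated as noisy.
    noisy = group if X[group].mean() > X[~group].mean() else ~group
    return np.where(noisy)[0]
```

In a full pipeline, the relabeling stage would then overwrite the labels of the flagged clients' samples, for example with the current global model's predictions, before the next training round.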