Backdoor attacks pose a significant threat to the integrity and reliability of artificial intelligence models: an adversary injects poisoned training data carrying a hidden trigger, so the model behaves normally on clean inputs but misbehaves whenever the trigger appears. Federated learning, in which multiple parties collaboratively train a shared model, is particularly vulnerable to such attacks because the central server never inspects each participant's local data and a malicious client can poison its dataset undetected. To counter this, researchers have proposed FL-PBM, a pre-training backdoor mitigation method that detects and removes poisoned data before local training begins, adding a layer of protection on top of the standard federated learning protocol. Because backdoored models can cause severe consequences in critical applications such as autonomous driving, healthcare, and finance, effective mitigation techniques like FL-PBM are crucial for maintaining the trustworthiness of AI models [1].
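The section does not describe FL-PBM's detection mechanism in detail, so the following Python sketch only illustrates the general idea of pre-training data filtering under stated assumptions: each client scores its local examples for anomaly (here, the distance of a feature vector from the dataset centroid) and drops statistical outliers before training begins. The function names (filter_poisoned_examples, distance_to_centroid), the scoring rule, and the threshold are hypothetical illustrations, not FL-PBM's actual algorithm.

```python
import numpy as np

def distance_to_centroid(features):
    """Anomaly score: Euclidean distance from each example's representation
    to the centroid of the local dataset. Trigger-carrying (poisoned)
    examples often sit in a cluster away from the clean data."""
    centroid = features.mean(axis=0)
    return np.linalg.norm(features - centroid, axis=1)

def filter_poisoned_examples(features, score_fn=distance_to_centroid,
                             z_threshold=3.0):
    """Drop examples whose anomaly score is a z-score outlier; return the
    indices of examples a client keeps for local training. The scoring
    rule and threshold are illustrative assumptions, not FL-PBM's actual
    detection criterion."""
    scores = score_fn(features)
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return np.where(z < z_threshold)[0]

# Toy demonstration: 95 clean examples plus 5 poisoned ones whose
# representations form a distant cluster, mimicking a hidden trigger.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(95, 16))
poisoned = rng.normal(6.0, 1.0, size=(5, 16))
local_features = np.vstack([clean, poisoned])

kept = filter_poisoned_examples(local_features)
print(f"kept {len(kept)} of {len(local_features)} local examples")
```

In a federated deployment, a filter of this kind would run on each client before the first local training round, so that flagged examples never contribute gradients to the shared model.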