Robotic perception systems that rely on Deep Neural Networks (DNNs) for semantic segmentation are susceptible to adversarial attacks, posing a significant threat to safety-critical applications. While DNNs excel at image classification, their vulnerability to adversarial perturbations in robotic contexts calls for specialized detection strategies and architectures. A lack of robustness in semantic segmentation can have severe consequences, compromising the reliability of the robotic systems that depend on it. A recent study [1] highlights the need for tailored approaches to detecting and mitigating adversarial attacks in robotic perception, emphasizing the importance of securing these systems. Effective detection methods are therefore essential for the safe operation of robots across varied environments, and practitioners should prioritize robust security measures that prevent adversarial attacks from exploiting vulnerabilities in their perception pipelines.
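To make the threat concrete, the sketch below applies the Fast Gradient Sign Method (FGSM), a standard adversarial attack, to a per-pixel classifier. This is an illustrative assumption, not the attack from the cited study: the tiny untrained ConvNet stands in for a real segmentation model, and the point is only to show how a small, bounded perturbation of the input can change per-pixel predictions.

```python
# Minimal FGSM sketch against a segmentation-style network (PyTorch).
# The ConvNet is a hypothetical stand-in for a real segmentation model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "segmentation" model: per-pixel logits over 3 classes.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 3, 1),  # 3 output classes, one score per pixel
)
model.eval()

def fgsm_attack(image, target, epsilon=0.03):
    """Perturb `image` by epsilon in the sign of the loss gradient."""
    image = image.clone().requires_grad_(True)
    logits = model(image)                          # (N, C, H, W)
    loss = nn.functional.cross_entropy(logits, target)
    loss.backward()
    adv = image + epsilon * image.grad.sign()      # bounded L-inf step
    return adv.clamp(0.0, 1.0).detach()

# Toy input; use the clean prediction as the "ground-truth" label map.
x = torch.rand(1, 3, 32, 32)
y = model(x).argmax(dim=1)
x_adv = fgsm_attack(x, y)

clean_pred = model(x).argmax(dim=1)
adv_pred = model(x_adv).argmax(dim=1)
flipped = (clean_pred != adv_pred).float().mean().item()
print(f"fraction of pixels whose predicted label flipped: {flipped:.3f}")
```

The perturbation is imperceptibly small (bounded by `epsilon` per pixel), yet it can flip per-pixel labels; detection methods in this setting typically look for statistical signatures of such perturbations in the input or in the network's internal activations.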