Continual learning models must adapt to new data after deployment, but existing approaches often assume large datasets or clearly separated tasks. Researchers have proposed a new method, similarity-aware mixture-of-experts, to improve data efficiency in continual learning [1]. The approach lets a model learn from limited data and adapt to new tasks that overlap with earlier ones: by recognizing similarities between tasks, the model can retain prior knowledge and reuse it in new situations, reducing the need for extensive retraining. This has significant implications for real-world applications, where data may be scarce or conditions may shift over time. For practitioners, the development matters because learning continuously and efficiently can enhance the agility and effectiveness of machine learning models in complex, evolving environments.
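To make the routing idea concrete, the sketch below shows one plausible way a similarity-aware expert router could work; it is an illustration under stated assumptions, not the paper's actual method. All names (`SimilarityAwareMoE`, `route`, the cosine-similarity measure, and the 0.7 threshold) are hypothetical: a new task's embedding is compared against the keys of existing experts, and the task is routed to a sufficiently similar expert (enabling knowledge reuse) or assigned a freshly allocated one.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b + 1e-12)


class SimilarityAwareMoE:
    """Hypothetical router: reuse an expert when a new task is similar
    enough to one seen before, otherwise allocate a new expert."""

    def __init__(self, similarity_threshold=0.7):
        self.similarity_threshold = similarity_threshold
        self.expert_keys = []  # one representative embedding per expert

    def route(self, task_embedding):
        """Return (expert_index, allocated_new_expert)."""
        if self.expert_keys:
            sims = [cosine_similarity(task_embedding, k)
                    for k in self.expert_keys]
            best = max(range(len(sims)), key=sims.__getitem__)
            if sims[best] >= self.similarity_threshold:
                # Similar task: reuse the expert, transferring its knowledge
                # instead of retraining from scratch.
                return best, False
        # No sufficiently similar expert: allocate a new one for this task.
        self.expert_keys.append(list(task_embedding))
        return len(self.expert_keys) - 1, True


moe = SimilarityAwareMoE()
print(moe.route([1.0, 0.0, 0.0]))  # first task: new expert 0
print(moe.route([0.9, 0.1, 0.0]))  # similar task: expert 0 reused
print(moe.route([0.0, 1.0, 0.0]))  # dissimilar task: new expert 1
```

In this toy version, reuse is what avoids retraining: a routed-to expert already encodes overlapping knowledge, so only a dissimilar task pays the cost of a new expert.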