Researchers have introduced MeMo, a modular framework that lets large language models incorporate new knowledge efficiently. It addresses a key limitation of current models: they remain static after pretraining and require costly retraining to absorb new information. MeMo instead encodes new knowledge in self-contained modules, enabling flexible and timely domain-specific updates. The framework has clear implications for real-world applications that depend on up-to-date information, such as question-answering systems and conversational AI. By providing a mechanism for efficient knowledge updates, MeMo could improve the performance and relevance of large language models across a wide range of tasks. For practitioners, it offers a path to more accurate, informative AI systems that keep pace with changing circumstances and user needs.
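The source does not describe MeMo's internals, but the core idea of modular knowledge updates can be illustrated with a toy sketch: a frozen base model paired with swappable domain modules that are consulted first. All names here (`KnowledgeModule`, `ModularModel`) are hypothetical and for illustration only; they are not from the MeMo framework itself.

```python
class KnowledgeModule:
    """Holds facts for one domain; can be added or replaced independently."""
    def __init__(self, domain, facts):
        self.domain = domain
        self.facts = dict(facts)

    def lookup(self, key):
        return self.facts.get(key)


class ModularModel:
    """A frozen base plus pluggable modules, consulted before the base."""
    def __init__(self, base_knowledge):
        self.base = dict(base_knowledge)  # stands in for frozen pretrained weights
        self.modules = {}                 # domain -> KnowledgeModule

    def attach(self, module):
        # Updating knowledge = swapping in a module; the base is untouched.
        self.modules[module.domain] = module

    def answer(self, key):
        for module in self.modules.values():
            hit = module.lookup(key)
            if hit is not None:
                return hit
        return self.base.get(key, "unknown")


model = ModularModel({"capital_of_france": "Paris"})
model.attach(KnowledgeModule("news", {"current_champion": "Team A"}))
print(model.answer("current_champion"))  # served from the attached module

# A timely update replaces one module without retraining anything.
model.attach(KnowledgeModule("news", {"current_champion": "Team B"}))
print(model.answer("current_champion"))
```

The point of the sketch is the separation of concerns: pretrained knowledge stays fixed while domain modules can be attached or replaced on their own schedule, which is the property that makes timely updates cheap.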