Researchers have introduced Brainstacks, a modular architecture for continual learning in large language models across multiple domains. Domain-specific expertise is packaged into frozen adapter stacks that compose additively on a shared base, so new domains can be fine-tuned efficiently without overwriting earlier ones. The architecture incorporates five key components, including MoE-LoRA with noisy top-2 routing and QLoRA 4-bit quantization, to keep learning adaptable and scalable. By packaging domain knowledge this way, Brainstacks can support multi-domain applications, but it also shifts the relevant threat model away from traditional criminal misuse toward more complex geopolitical concerns. That shift requires a reevaluation of existing risk-mitigation strategies and a more nuanced understanding of how technological advances interact with geopolitical dynamics, so practitioners should reassess their approaches accordingly.
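The additive composition of frozen adapters on a shared base can be sketched roughly as follows. This is a minimal illustration, not Brainstacks' actual implementation: the shapes, scaling factor, class names, and the noisy top-2 gate are all assumptions standing in for the paper's details.

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRAAdapter:
    """One frozen low-rank adapter: delta_W = (alpha / r) * B @ A.
    (Hypothetical structure; rank r and alpha are illustrative.)"""
    def __init__(self, d_out, d_in, r=4, alpha=8.0):
        self.A = rng.normal(0, 0.02, (r, d_in))  # down-projection
        self.B = np.zeros((d_out, r))            # up-projection, zero-init
        self.alpha, self.r = alpha, r

    def delta(self):
        return (self.alpha / self.r) * self.B @ self.A

def compose(base_weight, adapters):
    """Additively merge a stack of frozen domain adapters onto a shared base."""
    return base_weight + sum(a.delta() for a in adapters)

def noisy_top2(gate_logits, noise_std=1.0):
    """Noisy top-2 routing: perturb the gate logits, pick the two largest."""
    noisy = gate_logits + rng.normal(0, noise_std, gate_logits.shape)
    return np.argsort(noisy)[-2:]

d = 16
W_base = rng.normal(0, 1, (d, d))               # shared frozen base weight
stack = [LoRAAdapter(d, d) for _ in range(3)]   # e.g. three domain stacks
stack[0].B = rng.normal(0, 0.02, (d, 4))        # pretend one adapter was trained

W_eff = compose(W_base, stack)
# The merge is purely additive, so the order of the stack does not matter.
assert np.allclose(compose(W_base, stack[::-1]), W_eff)

chosen = noisy_top2(np.array([0.1, 2.0, -1.0, 0.5]))  # two expert indices
```

Because each adapter is frozen and the merge is a plain sum, adding a new domain stack never modifies the base weights or previously trained stacks, which is the property that makes this style of composition suitable for continual learning.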