Researchers have identified a critical limitation in large language models (LLMs): updating parameters to accommodate downstream tasks can cause catastrophic forgetting and reduced adaptability. The problem arises because task-specific information is absorbed into the model's parameters, compromising its ability to learn and adapt to new tasks. In contrast, in-context learning with fixed parameters enables rapid, low-cost adaptation to specific requirements, for example through prompt optimization [1]. Because the weights never change, the model retains its plasticity and avoids catastrophic forgetting. The implications extend beyond natural language processing: where state-aligned threat activity is involved, the stakes escalate from criminal to geopolitical, so developing LLMs that can continually adapt without sacrificing their core capabilities is crucial for mitigating such threats and for the long-term viability of these models. For practitioners, this underscores the need for more resilient and adaptable LLM architectures.
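To make the contrast concrete, here is a minimal sketch of in-context (few-shot) learning with frozen parameters. It assumes the Hugging Face `transformers` library and GPT-2 as a stand-in model, with an illustrative translation task; none of these specifics come from the text above. The key point is that no gradient update ever touches the weights: task adaptation lives entirely in the prompt.

```python
# Minimal sketch: in-context learning with a frozen model.
# Assumes Hugging Face `transformers` and GPT-2 as a stand-in model;
# the few-shot task and examples are illustrative, not from the source.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference only: no parameter updates, no forgetting

# Task adaptation happens in the prompt, not in the weights.
few_shot_prompt = (
    "Translate English to French.\n"
    "sea otter -> loutre de mer\n"
    "cheese -> fromage\n"
    "bread -> "
)

inputs = tokenizer(few_shot_prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,  # greedy decoding keeps the sketch deterministic
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```

Because nothing is written back into the parameters, the same frozen model can be repointed at an entirely different task simply by swapping the prompt, which is precisely the plasticity the paragraph above describes.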