Large Language Models (LLMs) are constrained by the traditional "train then deploy" paradigm, which prevents them from adapting to new information at inference time. Test-Time Training (TTT) offers a remedy: a small subset of model parameters, known as fast weights, is updated during inference, letting the model respond dynamically to shifting data streams and improving its performance on real-world tasks. The current LLM serving ecosystem, however, limits TTT's practicality. Researchers therefore propose In-Place Test-Time Training, a method that updates model weights in response to new information without requiring significant changes to the existing architecture. This approach has meaningful implications for building more adaptive and responsive LLMs. As state-aligned threat activity increases, the ability of LLMs to adapt in real time becomes crucial, with consequences extending beyond the immediate target to the geopolitical level. In-Place Test-Time Training matters to practitioners because it enables more dynamic and responsive LLMs that can counter emerging threats effectively.
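The core idea can be illustrated with a toy model. The sketch below is not the paper's method, only a minimal pure-Python illustration under assumed conditions: a frozen "slow" weight from pretraining, plus a "fast" weight adapted in place during inference by one gradient step per incoming example, using the model's own prediction error as the self-supervised signal.

```python
import random

def ttt_adapt(pairs, lr=0.1):
    """Adapt a fast weight online over a stream of (input, target) pairs.

    slow_w is frozen (the "pretrained" weight); fast_w is the small
    correction updated in place at test time. Returns the effective
    weight slow_w + fast_w after processing the stream.
    """
    slow_w = 1.0   # frozen pretrained weight (assumption: identity map)
    fast_w = 0.0   # fast weight, updated during inference
    for x, y in pairs:
        pred = (slow_w + fast_w) * x     # prediction with combined weights
        error = pred - y                 # self-supervised test-time signal
        fast_w -= lr * error * x         # in-place SGD step on fast_w only
    return slow_w + fast_w

# Simulate distribution shift: at test time the true mapping is y = 1.5 * x,
# so the fast weight should drift toward 0.5 to compensate.
random.seed(0)
stream = [(x, 1.5 * x) for x in (random.uniform(0.5, 1.5) for _ in range(200))]
effective_w = ttt_adapt(stream, lr=0.1)
```

After the stream, `effective_w` sits close to 1.5: the frozen slow weight is unchanged, and all adaptation is absorbed by the fast weight, which is the property that makes the update "in place" rather than a full retraining pass.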