Researchers have discovered that the weights of a pretrained neural network are surrounded by a multitude of task-specific experts, densely packed around the pretrained point in parameter space. This challenges the conventional view of pretraining as merely a starting point for further fine-tuning: the outcome of pretraining can instead be seen as a distribution over parameter vectors in which expert solutions for specific tasks are already embedded. Notably, these expert solutions occupy a relatively small volume within this distribution, particularly in smaller models [1].

This finding suggests that pretrained models are more versatile and adaptable than previously thought. The presence of diverse task experts around the pretrained weights could also have consequences for fields like cybersecurity, where AI models are increasingly used to detect and respond to threats. For practitioners, the discovery points toward more efficient and effective methods for adapting models to new tasks and environments, since a suitable expert may lie only a short distance from the weights they already have.
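To make the core idea concrete, here is a minimal, self-contained sketch of how one might probe whether task experts sit close to a pretrained point in weight space. It is not the referenced study's method: the toy MLP, the synthetic linear-teacher tasks, and all hyperparameters are illustrative assumptions. The sketch "pretrains" a small model on one task, briefly fine-tunes copies on other tasks, and reports how far each resulting "expert" has moved relative to the pretrained weights.

```python
# Illustrative sketch only: toy model and synthetic tasks, not the paper's setup.
import copy

import torch
import torch.nn as nn


def make_task(seed: int, n: int = 2048, d: int = 32):
    """Synthetic binary classification task from a random linear teacher."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, d, generator=g)
    w = torch.randn(d, generator=g)
    y = (x @ w > 0).long()
    return x, y


def train(model: nn.Module, x, y, steps: int = 300, lr: float = 1e-2):
    """Standard full-batch training loop."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model


def flat_params(model: nn.Module) -> torch.Tensor:
    """Concatenate all parameters into a single vector."""
    return torch.cat([p.detach().flatten() for p in model.parameters()])


d = 32
base = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))

# "Pretraining": train the base model on one task.
x0, y0 = make_task(seed=0)
train(base, x0, y0)
theta_pre = flat_params(base)

# Fine-tune copies on several other tasks and measure how far each
# resulting "expert" moves from the pretrained point in weight space.
for task_id in range(1, 4):
    expert = copy.deepcopy(base)
    x, y = make_task(seed=task_id)
    train(expert, x, y, steps=100)  # brief fine-tuning
    theta_ft = flat_params(expert)
    rel_dist = (theta_ft - theta_pre).norm() / theta_pre.norm()
    acc = (expert(x).argmax(dim=1) == y).float().mean()
    print(f"task {task_id}: accuracy {acc:.2f}, "
          f"relative distance from pretrained weights {rel_dist:.3f}")
```

If experts are indeed densely packed around the pretrained point, a small relative displacement should suffice to reach high accuracy on each new task; in larger, more realistic models this distance measurement would be taken between a pretrained checkpoint and its fine-tuned variants rather than on a toy MLP.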