Researchers have introduced a novel approach to adapting large pretrained models, dubbed Task-aware Union of Subspaces, which integrates compression and fine-tuning into a single step. The method departs from the conventional practice of applying parameter-efficient fine-tuning and low-rank compression sequentially, a pipeline that can leave the compressed subspace poorly aligned with downstream objectives because the subspace is chosen before the task is seen. By unifying the two steps, the approach makes more efficient use of a global parameter budget. This is particularly relevant when large models must be adapted to many tasks: optimizing compression and fine-tuning jointly can improve downstream performance while reducing compute and memory requirements.
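
To make the sequential-versus-joint distinction concrete, below is a minimal per-layer sketch in PyTorch. The class name `JointLowRankLinear`, the SVD-based initialization, and the rank value are illustrative assumptions, not the paper's actual implementation, and the sketch omits the global budget allocation across layers that the method describes.

```python
# A minimal sketch, assuming a single linear layer. JointLowRankLinear and
# rank=32 are illustrative choices, not the paper's method; cross-layer
# budget allocation is omitted.
import torch
import torch.nn as nn


class JointLowRankLinear(nn.Module):
    """Replace a pretrained weight W with trainable low-rank factors U @ V.

    A sequential pipeline would fix the subspace via task-agnostic SVD and
    then fine-tune inside it; here both factors remain trainable, so gradient
    descent on the downstream loss can rotate the retained subspace toward
    task-relevant directions while keeping the compressed parameter count.
    """

    def __init__(self, pretrained_weight: torch.Tensor, rank: int):
        super().__init__()
        # Task-agnostic starting point: truncated SVD of the pretrained weight.
        U, S, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        sqrt_s = S[:rank].sqrt()
        # Split the singular values across both factors for balanced scale.
        self.U = nn.Parameter(U[:, :rank] * sqrt_s)            # (out, rank)
        self.V = nn.Parameter(Vh[:rank, :] * sqrt_s[:, None])  # (rank, in)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equivalent to x @ W.T, with W approximated by U @ V.
        return x @ self.V.T @ self.U.T


# Usage: swap a dense layer for its compressed, task-trainable version,
# then train U and V directly on the downstream objective.
dense = nn.Linear(512, 512, bias=False)
layer = JointLowRankLinear(dense.weight.detach(), rank=32)
out = layer(torch.randn(4, 512))
```

In this sketch, compression (the truncated factorization) and fine-tuning (training the factors) happen in one optimization loop rather than two stages, which is the core idea the unified approach exploits.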