Multimodal Continual Instruction Tuning (MCIT) is hindered by catastrophic forgetting, which severely restricts the sequential task adaptation of Multimodal Large Language Models (MLLMs). Researchers have identified a dual-forgetting phenomenon that spans both perception drift in the Cross-modal Projection Space and the reasoning language backbone, compounding the problem. MAny, a Merge Anything approach, addresses this by enabling more effective adaptation to new tasks: it integrates new task knowledge while preserving previously learned knowledge, mitigating catastrophic forgetting and improving overall performance. For practitioners, this makes MLLMs more capable on complex, sequentially arriving tasks, and therefore more reliable for multimodal processing.
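The text does not specify how MAny performs its merging, but the idea of integrating new task weights while preserving existing knowledge can be illustrated with a generic task-vector merging sketch: the merged model is the base model plus scaled differences between each fine-tuned checkpoint and the base. The function name, data layout, and coefficients below are illustrative assumptions, not the actual MAny algorithm.

```python
# Hypothetical sketch of weight merging for continual learning.
# Illustrates generic task-vector merging:
#   merged = base + sum_i alpha_i * (task_i - base)
# This is NOT the MAny algorithm itself, which the text does not detail.

def merge_task_vectors(base, task_checkpoints, alphas):
    """Merge fine-tuned checkpoints back into the base weights.

    base and each entry of task_checkpoints map a parameter name
    to a flat list of floats; alphas are per-task scaling factors.
    """
    merged = {}
    for name, base_w in base.items():
        merged[name] = list(base_w)  # start from the base weights
        for ckpt, alpha in zip(task_checkpoints, alphas):
            # add the scaled task-specific delta for this parameter
            for j, w in enumerate(ckpt[name]):
                merged[name][j] += alpha * (w - base_w[j])
    return merged


# Toy usage: two task checkpoints, each shifting a different weight.
base = {"proj": [1.0, 2.0]}
task_a = {"proj": [2.0, 2.0]}   # task A moved the first weight
task_b = {"proj": [1.0, 4.0]}   # task B moved the second weight
merged = merge_task_vectors(base, [task_a, task_b], [0.5, 0.5])
print(merged["proj"])  # [1.5, 3.0]
```

With equal coefficients of 0.5, each task's change is half-applied, so the merged weights retain influence from both tasks rather than being overwritten by the most recent one — the intuition behind merging-based mitigation of forgetting.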