Researchers have developed a method for ablating refusal behavior in large language models (LLMs) using optimal transport. The approach builds on existing activation-based jailbreaking methods, which circumvent safety mechanisms by identifying a single refusal direction in a model's hidden activations and removing it. The new technique starts from the observation that refusal is not a one-dimensional phenomenon but a multi-dimensional one with a rich distributional structure. By leveraging optimal transport, it can identify and remove refusal behavior across that whole distribution rather than along a single direction, making the ablation more effective. The findings have significant implications for the development of safer LLMs: by showing how readily refusal can be stripped out at the activation level, they help developers understand and mitigate the risk of these models being exploited for malicious purposes. This matters to practitioners because as LLMs become increasingly capable, their risk surfaces expand, and security considerations must keep pace to prevent potential misuse.
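To make the contrast concrete, the sketch below illustrates the two ideas in NumPy/SciPy: the standard single-direction baseline (a difference-of-means refusal direction projected out of the activations) versus a distributional alternative that transports activations from one distribution onto the other. The transport map shown is the closed-form optimal transport map between Gaussian approximations of the two activation sets; the function names, the Gaussian assumption, and the toy data are illustrative assumptions and not the paper's actual method.

```python
import numpy as np
from scipy.linalg import sqrtm


def refusal_direction(harmful_acts, harmless_acts):
    """Single-direction baseline: unit-normalized difference of means."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)


def ablate_direction(acts, direction):
    """Project the refusal direction out of each activation vector."""
    return acts - np.outer(acts @ direction, direction)


def gaussian_ot_map(source_acts, target_acts, eps=1e-6):
    """Closed-form optimal transport map between Gaussian approximations
    of the source and target activation distributions."""
    mu_s, mu_t = source_acts.mean(axis=0), target_acts.mean(axis=0)
    cov_s = np.cov(source_acts, rowvar=False) + eps * np.eye(source_acts.shape[1])
    cov_t = np.cov(target_acts, rowvar=False) + eps * np.eye(target_acts.shape[1])
    cov_s_half = np.real(sqrtm(cov_s))
    cov_s_half_inv = np.linalg.inv(cov_s_half)
    middle = np.real(sqrtm(cov_s_half @ cov_t @ cov_s_half))
    A = cov_s_half_inv @ middle @ cov_s_half_inv  # linear part of the map
    return lambda x: (x - mu_s) @ A.T + mu_t


# Toy stand-ins for hidden-state activations (hypothetical data).
rng = np.random.default_rng(0)
harmful = rng.normal(size=(256, 64)) + 1.0   # activations on refusal-triggering prompts
harmless = rng.normal(size=(256, 64))        # activations on benign prompts

# Baseline: remove a single refusal direction.
direction = refusal_direction(harmful, harmless)
ablated = ablate_direction(harmful, direction)

# Distributional view: transport the refusal-prompt activations onto the
# benign-prompt distribution instead of removing one direction.
transport = gaussian_ot_map(harmful, harmless)
transported = transport(harmful)
```

The single-direction baseline only shifts activations along one axis, while the transport map matches the full mean and covariance of the target distribution, which is one simple way to see why a distributional treatment of refusal can be strictly more expressive than removing a lone direction.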