AI jailbreaking refers to the practice of bypassing restrictions imposed on large language models, enabling users to access unintended functionality. This cat-and-mouse game between AI developers and jailbreakers has its roots in the early days of iPhone hacking, where individuals would exploit vulnerabilities to install unauthorized software. Now, AI labs are facing a similar challenge, as jailbreakers attempt to liberate language models like ChatGPT from their intended constraints. The process involves identifying and exploiting weaknesses in the model's architecture, allowing users to manipulate the AI's behavior and elicit responses that may not be aligned with its original purpose1. As the blockchain ecosystem continues to evolve, AI jailbreaking has significant implications for both technological architecture and financial regulation. The ability to bypass AI restrictions raises important questions about the security and reliability of these models, making it a critical concern for AI developers and practitioners, who must now consider the potential risks and consequences of AI jailbreaking in their own work.