As Large Language Models gain capability and autonomy, they increasingly face conflicts among the instructions and values they are asked to follow, creating a need to understand and address these dilemmas. Researchers have categorized these conflicts and modeled model preferences as a priority graph, in which instructions and values are interconnected nodes whose relative priority depends on context. This framework supports a nuanced analysis of the decision-making processes within LLMs: by examining the priority graph, developers can see how a model weighs competing values and instructions, which in turn informs strategies for mitigating conflicts [1]. Aligning LLMs with human values and priorities matters beyond the technology itself, carrying broader societal implications for policy, security, and workforce dynamics. For practitioners, the takeaway is that resolving these dilemmas is essential to developing and deploying LLMs responsibly, given their far-reaching consequences.
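Since the priority graph is the core abstraction here, a minimal sketch may help make it concrete. The sketch below assumes one simple representation: nodes are instructions or values, and a directed edge labeled with a context means the source node outranks the target whenever that context applies. All names (`PriorityGraph`, `add_priority`, `resolve`) and the example contexts are illustrative assumptions, not taken from any published implementation.

```python
# A minimal sketch of a priority graph for instruction/value conflicts.
# Names and contexts are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class PriorityGraph:
    """Directed graph: an edge (a -> b) tagged with a context means
    node a outranks node b whenever that context applies."""
    edges: dict = field(default_factory=dict)  # (a, b) -> set of contexts

    def add_priority(self, higher: str, lower: str, context: str) -> None:
        """Record that `higher` takes precedence over `lower` in `context`."""
        self.edges.setdefault((higher, lower), set()).add(context)

    def resolve(self, a: str, b: str, context: str) -> str | None:
        """Return the node that wins the conflict in `context`, if known."""
        if context in self.edges.get((a, b), set()):
            return a
        if context in self.edges.get((b, a), set()):
            return b
        return None  # no recorded preference for this context

# Example: safety outranks helpfulness for harmful requests, while user
# instructions outrank stylistic defaults in ordinary conversation.
g = PriorityGraph()
g.add_priority("avoid_harm", "be_helpful", context="harmful_request")
g.add_priority("user_instruction", "default_style", context="benign_request")

print(g.resolve("be_helpful", "avoid_harm", "harmful_request"))        # avoid_harm
print(g.resolve("user_instruction", "default_style", "benign_request")) # user_instruction
```

The key design point this captures is that priorities are context-specific rather than a single global ranking: the same pair of nodes can resolve differently depending on the situation, which is what makes the graph useful for analyzing how a model weighs competing demands.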