Researchers have identified a critical limitation in Safe Reinforcement Learning from Human Feedback (Safe RLHF) methods, which typically enforce safety through constraints on expected cost. Because an expectation averages over outcomes, a policy can satisfy an average-cost budget while still occasionally incurring catastrophic costs; expected-cost constraints therefore fail to account for distributional uncertainty, particularly heavy-tailed cost distributions and rare catastrophic events, and pose significant risks in settings where robustness and risk sensitivity are crucial. To address this, a new framework has been proposed that leverages stochastic dominance to achieve universal spectral risk control, enabling more comprehensive and reliable safety guarantees. The development is particularly significant in the context of state-aligned activity involving reinforcement learning, where the threat model shifts from criminal to geopolitical and demands a distinct approach. For practitioners, the key takeaway is that constraining spectral risk measures rather than expected cost offers a more nuanced treatment of risk and supports RLHF systems that remain robust and risk-sensitive in the face of rare but severe failures.
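
To make the gap concrete, the following minimal Python sketch contrasts an expected-cost check with a spectral risk measure, using CVaR as its best-known special case, on a heavy-tailed cost distribution. It is illustrative only: the function names, the Pareto cost model, the budget, and the CVaR weighting are assumptions for this example, not the proposed framework's implementation.

```python
# Illustrative sketch (not the paper's method): contrasts an expected-cost
# constraint with a spectral risk measure on heavy-tailed safety costs.
# All names and parameters (spectral_risk, cvar_weights, the Pareto shape,
# the budget) are assumptions chosen for this example.
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-episode safety costs with a heavy (Pareto) tail:
# most episodes are cheap, but rare episodes are catastrophic.
costs = rng.pareto(a=1.5, size=100_000)

def spectral_risk(samples: np.ndarray, weights: np.ndarray) -> float:
    """Spectral risk measure: a weighted average of the sorted costs.
    weights must be non-negative, non-decreasing (so the worst outcomes
    get the most weight), and sum to 1."""
    return float(np.dot(np.sort(samples), weights))

def cvar_weights(n: int, alpha: float) -> np.ndarray:
    """CVaR_alpha as a special case of a spectral weighting: uniform
    weight on the worst (1 - alpha) fraction of outcomes, zero elsewhere."""
    k = int(np.ceil(n * alpha))  # index where the tail begins
    w = np.zeros(n)
    w[k:] = 1.0 / (n - k)
    return w

budget = 3.0  # hypothetical safety budget
expected_cost = costs.mean()
tail_cost = spectral_risk(costs, cvar_weights(len(costs), alpha=0.95))

# An expected-cost constraint can pass while tail risk is far over budget.
print(f"E[cost]   = {expected_cost:.2f}  (within budget = {budget})")
print(f"CVaR_0.95 = {tail_cost:.2f}  (violates the same budget)")
```

Since every spectral risk measure is a mixture of CVaRs at different levels, bounding all of them simultaneously amounts to a second-order stochastic dominance condition on the cost distribution, which is presumably what makes the dominance-based formulation "universal" rather than tied to one choice of risk spectrum.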