Diffusion language models have gained traction as a viable alternative to traditional autoregressive models, offering flexible token ordering and the ability to decode multiple tokens in parallel. This flexibility, however, poses a unique challenge: choosing a decoding strategy, that is, deciding which tokens, and how many, to commit at each iteration. Researchers have now demonstrated that confidence-based decoding, in which the model commits at each step the tokens it predicts with the highest confidence, is a provably efficient strategy for diffusion language models [1]. By prioritizing high-confidence predictions, this strategy tames the combinatorial choice of generation order introduced by flexible decoding while preserving output quality. The result has practical implications: confidence-based decoding can improve both the performance and the scalability of diffusion language models, making them a more attractive option for large-scale language modeling tasks.
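To make the idea concrete, here is a minimal toy sketch of confidence-based decoding. It is an illustration of the general technique, not the exact algorithm from the cited work: all function names (`confidence_decode`, `predict_logits`) and parameters (`tokens_per_step`, the stand-in random "model") are invented for this example. Starting from a fully masked sequence, each iteration scores every masked position by its top predicted probability and commits the most confident ones.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_decode(predict_logits, seq_len, tokens_per_step=2, mask_id=-1):
    """Toy confidence-based decoder: at each iteration, commit the
    masked positions whose top predicted probability is highest."""
    seq = np.full(seq_len, mask_id, dtype=int)
    while (seq == mask_id).any():
        logits = predict_logits(seq)        # (seq_len, vocab_size)
        probs = softmax(logits)
        conf = probs.max(axis=-1)           # per-position confidence
        conf[seq != mask_id] = -np.inf      # skip already-decoded slots
        k = min(tokens_per_step, int((seq == mask_id).sum()))
        pick = np.argsort(conf)[-k:]        # k most confident positions
        seq[pick] = probs[pick].argmax(axis=-1)
    return seq

# Usage with a hypothetical stand-in "model": fixed random logits
# over an 8-token sequence and a 5-token vocabulary.
rng = np.random.default_rng(0)
fixed_logits = rng.normal(size=(8, 5))
out = confidence_decode(lambda seq: fixed_logits, seq_len=8)
```

A real diffusion model would recompute the logits from the partially decoded sequence at every step, so earlier commitments sharpen later predictions; the fixed logits here merely keep the sketch self-contained.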