Researchers have introduced SIEVES, a selective prediction method that improves the reliability of multimodal large language models (MLLMs) on vision-language tasks by scoring visual evidence. The approach aims to increase coverage, the proportion of inputs the system answers rather than abstains on, while keeping error rates low even in out-of-distribution (OOD) scenarios. SIEVES generalizes by scoring how reliable the visual cues behind a candidate answer are: when that evidence is insufficient, the model abstains instead of risking an incorrect response. This matters for practitioners seeking to deploy MLLMs in high-stakes, real-world settings, where a system that knows when not to answer is more trustworthy than one that always guesses.
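The selective-prediction loop described above can be illustrated with a minimal sketch. The summary does not specify SIEVES's actual scoring function or threshold, so `selective_predict`, the `threshold` value, and the evidence scores below are hypothetical placeholders; only the abstain-or-answer logic and the standard coverage/risk definitions are illustrated.

```python
def selective_predict(answer: str, evidence_score: float,
                      threshold: float = 0.7):
    """Answer only when the visual-evidence score clears the threshold.

    The scoring function itself (how evidence_score is computed from the
    image and question) is the method-specific part and is assumed here.
    Returning None denotes abstention.
    """
    if evidence_score >= threshold:
        return answer
    return None  # evidence too weak: abstain rather than guess


def coverage_and_risk(predictions, gold):
    """Coverage = fraction of inputs answered; risk = error rate among answered."""
    answered = [(p, g) for p, g in zip(predictions, gold) if p is not None]
    coverage = len(answered) / len(predictions)
    errors = sum(1 for p, g in answered if p != g)
    risk = errors / len(answered) if answered else 0.0
    return coverage, risk


if __name__ == "__main__":
    # Toy example: (candidate answer, evidence score) per input.
    candidates = [("cat", 0.9), ("dog", 0.4), ("bird", 0.8)]
    gold = ["cat", "dog", "fish"]
    preds = [selective_predict(a, s) for a, s in candidates]
    cov, risk = coverage_and_risk(preds, gold)
    print(f"coverage={cov:.2f}, risk={risk:.2f}")
```

Raising the threshold lowers coverage but typically lowers risk as well; the coverage/risk trade-off is the quantity a selective predictor like SIEVES is evaluated on.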