Researchers have introduced HorizonMath, a benchmark of over 100 unsolved mathematical problems spanning eight domains of computational and applied mathematics. It is designed to assess whether large language models can conduct novel mathematical research and make measurable progress on important open problems. Because candidate solutions are checked by automatic verification, HorizonMath can evaluate AI systems' sophisticated mathematical and scientific reasoning without manual grading. The benchmark's scope and difficulty make it a robust testing ground for AI's potential to drive breakthroughs in mathematical discovery, and its development marks a significant step toward understanding both the limitations and the possibilities of AI in advancing mathematical knowledge. For practitioners, HorizonMath offers a standardized framework for gauging AI's mathematical capabilities, supporting more informed decisions about where AI can accelerate mathematical innovation.
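The summary does not specify how HorizonMath's automatic verification works. As a minimal sketch of the general idea, assuming (hypothetically) that each problem ships a reference answer and submissions are numeric expressions or exact strings, a verifier might compare values exactly rather than by substring matching:

```python
from fractions import Fraction

def verify_answer(submitted: str, reference: str) -> bool:
    """Hypothetical checker: compare answers as exact rationals when
    possible (so "3/4" matches "0.75"), otherwise fall back to an
    exact string comparison. Real benchmark harnesses are typically
    more elaborate (symbolic equality, certificates, unit tests)."""
    try:
        # Fraction parses both decimal and rational notation exactly.
        return Fraction(submitted) == Fraction(reference)
    except ValueError:
        # Non-numeric answers: require an exact (whitespace-trimmed) match.
        return submitted.strip() == reference.strip()

# Example usage with hypothetical answers:
print(verify_answer("3/4", "0.75"))   # equivalent rationals
print(verify_answer("2", "3"))        # mismatched values
```

Exact rational comparison avoids the floating-point tolerance questions that plague naive `float(submitted) == float(reference)` checks; actual verifiers for research-level problems would likely also need symbolic or proof-based checking.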