Large language models are being tested for their ability to assess the novelty of academic papers, a critical aspect of peer review. NovBench, a dedicated benchmark, enables systematic evaluation of these models' capabilities [1]. The development matters because the volume of academic submissions continues to grow, straining human reviewers. Using NovBench, researchers can fine-tune large language models on peer review data and measure how well they identify novel contributions; a sketch of such an evaluation loop follows below. Reliable automated novelty assessment could ease the burden on human reviewers and improve the overall efficiency of peer review. As AI models become more common in academic publishing, their ability to accurately judge novelty will be essential for maintaining the integrity of the review process, and for practitioners the key question is whether NovBench can make peer review more reliable and consistent.
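
The source does not specify NovBench's evaluation protocol, but a minimal sketch can make the workflow concrete: a model wrapper assigns a novelty score to each submission, and rank correlation against human reviewer ratings measures agreement. The dataset fields, the `score_fn` wrapper, and the length-based baseline below are all illustrative assumptions, not NovBench's actual API.

```python
"""Minimal sketch of a NovBench-style novelty evaluation loop.

Assumptions (not from the source): the benchmark pairs paper
abstracts with human novelty ratings, and the model under test
returns a scalar novelty score per paper.
"""
from typing import Callable

from scipy.stats import spearmanr


def evaluate_novelty_scoring(
    papers: list[dict],                # each: {"abstract": str, "human_score": float}
    score_fn: Callable[[str], float],  # model wrapper: abstract -> novelty score
) -> float:
    """Rank-correlate model novelty scores with human reviewer ratings."""
    model_scores = [score_fn(p["abstract"]) for p in papers]
    human_scores = [p["human_score"] for p in papers]
    # Spearman correlation asks: does the model rank papers by novelty
    # in the same order as human reviewers? It suits ordinal ratings
    # better than Pearson, which assumes a linear relationship.
    rho, _p_value = spearmanr(model_scores, human_scores)
    return rho


if __name__ == "__main__":
    # Toy stand-in data; a real run would load the benchmark split.
    toy_papers = [
        {"abstract": "We propose a fundamentally new attention mechanism ...",
         "human_score": 4.0},
        {"abstract": "A survey of pruning methods ...", "human_score": 2.0},
        {"abstract": "We extend a known benchmark to novelty ...",
         "human_score": 3.5},
    ]
    # Hypothetical baseline (abstract length) so the sketch runs
    # without any model download; swap in a real LLM wrapper here.
    baseline = lambda abstract: float(len(abstract))
    rho = evaluate_novelty_scoring(toy_papers, baseline)
    print(f"Spearman rho vs. human ratings: {rho:.2f}")
```

Keeping the model behind a plain `score_fn` callable is a deliberate choice in this sketch: the same harness can then compare a fine-tuned model, a prompted off-the-shelf model, and a trivial baseline under identical conditions.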