Large language models can generate highly convincing text, posing risks such as phishing and academic dishonesty. To counter these threats, researchers have developed algorithms for detecting AI-generated text and constructed datasets to support them. C-ReD, a new benchmark, addresses the lack of comprehensive detection datasets for Chinese language models. Because it is derived from real-world prompts, it represents the challenges of AI-generated text detection more faithfully. C-ReD fills a significant gap in existing research, particularly in the Chinese language domain, and enables more effective evaluation and training of detection models, strengthening the security and integrity of digital content. For practitioners, the benchmark supports the development of more robust detection systems, reducing the risks associated with AI-generated text.
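To illustrate how such a benchmark is typically consumed, the minimal sketch below scores a detector on labeled (text, label) pairs, where label 1 marks machine-written text. The `naive_detector` heuristic and the sample data are hypothetical stand-ins for illustration only, not part of C-ReD or any published detection method.

```python
# Hypothetical sketch: evaluating a detector against a labeled benchmark.
# The detector and samples are illustrative stand-ins, not C-ReD itself.

def naive_detector(text: str) -> int:
    """Toy heuristic: flag long, comma-free sentences as machine-written (1)."""
    return 1 if len(text) > 40 and "," not in text else 0

def accuracy(detector, samples) -> float:
    """Fraction of (text, label) pairs the detector classifies correctly."""
    correct = sum(detector(text) == label for text, label in samples)
    return correct / len(samples)

samples = [
    ("Short human note, quickly written.", 0),
    ("This sentence is quite long and contains no pauses whatsoever here", 1),
]
print(accuracy(naive_detector, samples))
```

A real evaluation would replace the heuristic with a trained classifier and report metrics such as precision, recall, and AUROC alongside raw accuracy.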