Hallucinations in Video Large Language Models (Vid-LLMs) pose a significant challenge: the models generate outputs that appear plausible but contradict the input video content. A recent survey [1] provides a comprehensive analysis of this issue, introducing a taxonomy that groups hallucinations into two primary types. The taxonomy clarifies how and where Vid-LLMs produce distorted or fabricated information, and the survey's findings underscore the need for models that capture and represent video content more faithfully, reducing how often hallucinations occur. As Vid-LLMs continue to advance, their applications will extend beyond technology into policy, security, and workforce dynamics, so the ability to identify and mitigate hallucinations directly affects the reliability and trustworthiness of these models. Understanding and addressing this challenge is therefore essential for practitioners working with Vid-LLMs.
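For practitioners who want a concrete starting point, the sketch below illustrates one simple grounding heuristic, not a method from the survey: sample frames from a video, score each statement generated by a Vid-LLM against those frames with CLIP image-text similarity, and flag statements that no frame supports. The model name, frame count, similarity threshold, and `video.mp4` path are illustrative assumptions.

```python
# A minimal sketch of a frame-grounding check for Vid-LLM outputs.
# Assumptions: CLIP similarity as the grounding signal; the model name,
# frame count, threshold, and video path below are placeholders.
import cv2
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor


def sample_frames(video_path: str, num_frames: int = 8) -> list[Image.Image]:
    """Uniformly sample RGB frames from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in range(0, total, max(total // num_frames, 1)):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames


def flag_unsupported(statements: list[str], frames: list[Image.Image],
                     threshold: float = 22.0) -> list[tuple[str, float]]:
    """Return statements whose best frame-similarity score falls below the threshold."""
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = processor(text=statements, images=frames,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_text  # (num_statements, num_frames)
    best = logits.max(dim=1).values
    return [(s, float(b)) for s, b in zip(statements, best) if b < threshold]


if __name__ == "__main__":
    frames = sample_frames("video.mp4")           # placeholder path
    claims = ["A dog is catching a frisbee.",     # statements extracted from a
              "The scene takes place at night."]  # Vid-LLM's generated description
    for claim, score in flag_unsupported(claims, frames):
        print(f"Possibly hallucinated (max CLIP score {score:.1f}): {claim}")
```

Taking the per-frame maximum is deliberately forgiving: a statement only needs support from one sampled frame, which limits false alarms but cannot catch temporal errors such as events reported in the wrong order, one of the harder cases the survey's taxonomy is meant to expose.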