Researchers have demonstrated that Large Language Models (LLMs) can fact-check natural language claims using only their parametric knowledge, without retrieving external evidence. The approach verifies claims from varied sources, including human-written text and web content, against what the model already encodes in its parameters. Skipping the retrieval step streamlines the fact-checking pipeline and avoids dependence on external corpora that may be biased or outdated. The study's findings bear on the development of more reliable and trustworthy AI systems, particularly in applications where factuality is crucial [1]. For practitioners, the result points to faster and potentially more accurate AI-assisted fact-checking in domains such as cybersecurity, journalism, and policy-making.
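To make the idea concrete, here is a minimal sketch of retrieval-free claim verification: the model is prompted to judge a claim using only its internal knowledge and to emit a verdict label. The prompt wording, the label set, and the model name are illustrative assumptions, not the study's exact protocol.

```python
# Sketch: fact-checking a claim with an LLM's parametric knowledge only.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# "gpt-4o-mini" is a placeholder model name, not one named by the study.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Using only your internal knowledge (assume no access to external "
    "documents or search), assess the following claim.\n"
    "Claim: {claim}\n"
    "Answer with exactly one label: SUPPORTED, REFUTED, or NOT ENOUGH INFO, "
    "followed by a one-sentence justification."
)

def check_claim(claim: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to verify a claim against its parametric knowledge."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep output stable for a classification-style task
        messages=[{"role": "user", "content": PROMPT.format(claim=claim)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(check_claim("The Eiffel Tower is located in Berlin."))
```

Because no retriever or document store is involved, the verdict is only as reliable as the model's training data, which is precisely the trade-off the study examines.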