Medication errors pose a significant threat to patient safety, and pharmacist verification is a critical final safeguard. However, directly applying Large Language Models (LLMs) to prescription verification is problematic because of their factual unreliability and lack of traceability. To address these challenges, PharmGra, a hybrid knowledge-grounded framework, combines the language capabilities of LLMs with knowledge graph-based grounding, so that verification results are both more accurate and traceable to explicit evidence. As LLMs are increasingly explored in high-stakes domains, frameworks like PharmGra help mitigate the safety risks of ungrounded model outputs. For practitioners, this underscores the need to account for LLMs' limitations and to favor AI systems whose conclusions can be audited against authoritative knowledge sources.
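To make the grounding idea concrete, the sketch below shows one way an interaction check could be anchored to an explicit knowledge graph rather than to an LLM's unverifiable assertion. This is a minimal illustration under stated assumptions: the toy graph, the drug names, and the function `verify_prescription` are hypothetical and do not reflect PharmGra's actual API or data.

```python
# Hypothetical sketch of knowledge-graph-grounded verification.
# The graph below is a toy example, not real clinical data.

# Toy knowledge graph: drug -> set of drugs it is known to interact with.
INTERACTIONS = {
    "warfarin": {"aspirin", "ibuprofen"},
    "aspirin": {"warfarin"},
    "ibuprofen": {"warfarin"},
}

def verify_prescription(drugs):
    """Cross-check every drug pair against the knowledge graph.

    Returns (drug_a, drug_b, evidence) tuples so that each flagged
    interaction is traceable to a specific graph edge.
    """
    findings = []
    for i, a in enumerate(drugs):
        for b in drugs[i + 1:]:
            if b in INTERACTIONS.get(a, set()):
                findings.append((a, b, f"edge {a} -> {b} in interaction graph"))
    return findings

flags = verify_prescription(["warfarin", "aspirin", "metformin"])
# Flags the warfarin/aspirin pair; metformin has no edge in this toy graph.
```

The design point is traceability: every flag carries a pointer to the graph edge that justified it, so a pharmacist can audit the evidence instead of trusting an opaque model judgment.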