Large language models (LLMs) have shown impressive capabilities, but they fall short on open-domain implicit question-answering tasks, where the answer is not stated directly and must be inferred from background knowledge. Two limitations drive this gap: a model's parametric knowledge may be outdated or incomplete, and one-shot generation leaves no opportunity to gather missing facts before answering. To address this, researchers have proposed gradually excavating external knowledge — retrieving and integrating evidence step by step rather than in a single pass — to improve the comprehensiveness and accuracy of LLM answers [1]. For practitioners, the takeaway is that more sophisticated knowledge-integration techniques are needed to unlock the full potential of LLMs on complex question-answering tasks.
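To make the idea of gradual knowledge excavation concrete, here is a minimal toy sketch of an iterative retrieve-then-expand loop. Everything in it — the knowledge base, the keyword retriever, and the query-expansion step — is a hypothetical stand-in for illustration, not the paper's actual components:

```python
# Toy sketch of "gradual knowledge excavation": retrieve a passage,
# fold its terms back into the query, and retrieve again, so that
# facts not mentioned in the original question become reachable.
# KNOWLEDGE_BASE and the keyword matcher are illustrative stand-ins.

KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris.",
    "Paris is the capital of France.",
    "France is a country in Western Europe.",
]

def terms(text):
    """Lowercase keyword set, with simple punctuation stripped."""
    return {w.strip("?.,").lower() for w in text.split()}

def retrieve(query_terms, seen):
    """Return one not-yet-seen passage sharing a keyword with the query."""
    for passage in KNOWLEDGE_BASE:
        if passage not in seen and terms(passage) & query_terms:
            return passage
    return None

def excavate(question, max_steps=3):
    """Gradually gather external evidence before answering."""
    query_terms = terms(question)
    evidence = []
    for _ in range(max_steps):
        passage = retrieve(query_terms, evidence)
        if passage is None:
            break  # nothing new left to excavate
        evidence.append(passage)
        # Expand the query with the new passage's terms so the next
        # retrieval hop can reach facts absent from the question itself.
        query_terms |= terms(passage)
    return evidence

print(excavate("Which country is the Eiffel Tower in?"))
# The loop first finds the Eiffel Tower passage, then hops via "Paris"
# and "France" to evidence the question alone would not surface.
```

The point of the sketch is the loop structure: a one-shot retriever would stop after the first passage, while the iterative expansion lets later hops reach the multi-step evidence that implicit questions require.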