A new tool, `scicode-lint`, addresses a significant gap in scientific Python tooling by identifying "methodology bugs": flaws that produce plausible-looking yet incorrect results. Traditional linters and static analyzers fail to detect these subtle errors; `scicode-lint` targets them directly. Previous ML-specific linters demonstrated that detection is feasible but faced serious sustainability problems, including tight coupling to particular Python or `pylint` versions and a reliance on extensive manual engineering to write detection patterns. `scicode-lint` sidesteps these limitations by using Large Language Models (LLMs) to generate the detection patterns automatically, an approach intended to improve the tool's adaptability and reduce the manual effort of maintenance. Ensuring the integrity of scientific code, especially in AI and machine learning contexts, is critical: undetected methodological flaws can compromise research findings and misinform decisions across policy, security, and technological development.
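To make the idea of a "methodology bug" concrete, the sketch below shows the kind of check such a linter might apply. It is a hypothetical, hand-written pattern (not taken from `scicode-lint` itself, whose patterns are LLM-generated): an AST walk that flags a classic test-set leakage error, calling `.fit()` or `.fit_transform()` on test data. Code like this runs without error and yields plausible metrics, which is exactly why ordinary linters miss it.

```python
import ast

# Hypothetical pattern: preprocessing steps should never be fit on test data.
LEAKAGE_METHODS = {"fit", "fit_transform"}


def find_leakage(source: str) -> list[int]:
    """Return line numbers where .fit()/.fit_transform() is called on a
    variable whose name suggests test data (illustrative heuristic only)."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr in LEAKAGE_METHODS
        ):
            for arg in node.args:
                if isinstance(arg, ast.Name) and "test" in arg.id.lower():
                    hits.append(node.lineno)
    return hits


snippet = """
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.fit_transform(X_test)
"""
print(find_leakage(snippet))  # flags the fit on X_test
```

The appeal of LLM-generated patterns is that heuristics like the one above, which otherwise require manual curation for every library and bug class, could be produced and updated automatically.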