Physics-informed neural networks (PINNs) have emerged as a powerful tool for solving partial differential equations, but the difficulty of quantifying their prediction errors has hindered their adoption. To address this, researchers have proposed a lightweight method for estimating PINN errors using finite differences [1]. The approach quantifies prediction uncertainty, which is crucial for establishing trust in PINNs: by applying finite-difference methods to a trained network's output, users gain insight into the deviation between the PINN's prediction and the true solution, enhancing the reliability of these models. Such error-estimation techniques matter because they can broaden the acceptance of PINNs in fields where accuracy and precision are paramount, helping practitioners understand a model's limitations and make more informed decisions about deploying it in critical applications.
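
The idea can be illustrated with a minimal sketch (not the cited authors' implementation): given a candidate solution, apply a finite-difference stencil to check how well it satisfies the governing equation, and use the size of that residual as an error indicator. Here `u_hat` is a hypothetical stand-in for a trained PINN's prediction on the Poisson problem u''(x) = f(x); all function names are illustrative assumptions.

```python
import numpy as np

def u_hat(x):
    # Hypothetical stand-in for a trained PINN's output.
    # The exact solution of u'' = -sin(x) (with matching BCs) is sin(x);
    # a small perturbation mimics PINN approximation error.
    return np.sin(x) + 1e-3 * x * (np.pi - x)

def f(x):
    # Right-hand side of the PDE u''(x) = f(x).
    return -np.sin(x)

def fd_residual(u, rhs, x, h=1e-3):
    """Estimate the PDE residual u''(x) - f(x) with a central difference.

    A large residual signals that the candidate solution deviates from
    the true solution near x, without requiring the true solution.
    """
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    return u_xx - rhs(x)

x = np.linspace(0.1, np.pi - 0.1, 50)
res = fd_residual(u_hat, f, x)
print(np.max(np.abs(res)))  # pointwise error indicator over the domain
```

The residual here is small but nonzero, reflecting the injected perturbation; on an exact solution it would vanish up to the O(h^2) stencil error. In practice the same check runs on the actual network's predictions at collocation points.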