A widely used open-source AI project, LiteLLM, was compromised by credential-harvesting malware, putting millions of users at risk. The infection was discovered during a security compliance review conducted by Delve; technical details of the malware and the vulnerability it exploited have not been disclosed. The breach highlights how exposed open-source projects are to cyber threats, particularly projects that handle sensitive user data, and it underscores the need for rigorous security testing and validation in AI development. It also raises the concern that similar compromises could occur in other open-source AI projects. For practitioners, the incident is a reminder to prioritize security throughout the development lifecycle and to harden defenses against malware and other attacks on the software supply chain.
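The source does not describe specific mitigations, but one standard defense against tampered packages or artifacts is to verify a cryptographic checksum against a pinned, known-good value before trusting the file. The sketch below is illustrative only and is not drawn from the LiteLLM incident; the payloads and the pinned digest are hypothetical:

```python
import hashlib


def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256


# Hypothetical artifact: in practice the pinned digest comes from a trusted
# source (e.g. a lockfile recorded at release time), not computed on the spot.
trusted_payload = b"example package contents"
pinned_digest = hashlib.sha256(trusted_payload).hexdigest()

print(verify_artifact(trusted_payload, pinned_digest))       # digest matches
print(verify_artifact(b"tampered contents", pinned_digest))  # any change fails
```

Package managers offer the same idea natively; for example, `pip install --require-hashes` refuses to install a dependency whose downloaded archive does not match the hash recorded in the requirements file.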