A US senator has opened an investigation into eight prominent tech companies, citing their alleged failure to properly report child sexual abuse material (CSAM) and to provide adequate data on generative AI. The inquiry stems from reports by the National Center for Missing & Exploited Children (NCMEC) highlighting the companies' deficiencies in CSAM reporting. The senator aims to hold these companies accountable for their role in facilitating, or failing to prevent, the spread of CSAM, and the investigation may bring increased scrutiny of their content moderation practices and their compliance with existing laws and regulations. As the use of generative AI grows, so does the potential for CSAM to spread through these companies' platforms, making it essential that they prioritize robust reporting and moderation mechanisms. This development matters to cybersecurity practitioners because it underscores the need for proactive measures to prevent the exploitation of emerging technologies.