Researchers have shown that large language models (LLMs) can deanonymize individuals from their online posts, posing a significant threat to online anonymity. By analyzing writing on platforms such as Hacker News, Reddit, and LinkedIn, LLMs can identify users with high precision even among tens of thousands of candidates. Unlike a human investigator, who must manually search for clues and piece an identity together, an LLM can process vast amounts of unstructured text automatically, making it a powerful tool for linking anonymous accounts to real people.

The implications for online privacy are serious: individuals can no longer assume their posts are anonymous, and the ability to scale deanonymization to large datasets raises the prospect of widespread identification of anonymous users. This matters to security practitioners because it demands proactive defenses; individuals and organizations alike must reevaluate how they protect online identities and maintain confidentiality in the face of increasingly sophisticated deanonymization techniques.
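At its core, this kind of attack is a candidate-ranking problem: given an anonymous post and a pool of known authors, score how well each author's writing matches and return the best candidates. The sketch below is a deliberately minimal stand-in that uses bag-of-words cosine similarity rather than an LLM; the function names and sample data are hypothetical, and real attacks described in the research operate on far richer signals than word overlap.

```python
from collections import Counter
from math import sqrt

def features(text: str) -> Counter:
    # Crude stylometric features: lowercase word counts.
    # An LLM-based attack would extract far subtler cues.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(anon_post: str, candidates: dict[str, str]) -> list[tuple[str, float]]:
    # Score each known author's writing against the anonymous post
    # and return candidates sorted by similarity, best match first.
    anon = features(anon_post)
    scored = [(name, cosine(anon, features(text))) for name, text in candidates.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

# Hypothetical example: two candidate authors, one anonymous post.
candidates = {
    "alice": "i really enjoy rust and systems programming honestly",
    "bob": "my sourdough starter needs feeding again today",
}
ranking = rank_candidates("honestly rust systems programming is what i enjoy", candidates)
print(ranking[0][0])  # prints "alice"
```

Even this toy version hints at why the threat scales: scoring is embarrassingly parallel across candidates, so widening the pool from two authors to tens of thousands changes only the compute budget, not the method.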