The increasing use of Large Language Models (LLMs) in software engineering has driven a sharp rise in computational costs, making their continued growth unsustainable. Built on transformer architectures, these models are not only large and slow to deploy but also memory-intensive and carbon-heavy, threatening the scalability and accessibility of AI-powered software engineering. Researchers have proposed a green compression pipeline to mitigate these issues, focusing on reducing the environmental impact of LLMs [1]. The approach aims to compress overgrown language models, making them more efficient and environmentally friendly. The security implications of LLMs are also a concern, since their development and deployment can introduce new risks. The growing dependence on these models therefore calls for a more sustainable and secure approach to their development and use; what matters most to practitioners is striking a balance between capability and risk that preserves the long-term viability of AI-powered software engineering.
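The text does not spell out what the green compression pipeline consists of, so the sketch below only illustrates the general idea with one common compression technique: post-training dynamic quantization in PyTorch, which shrinks a model's memory footprint without retraining. The toy transformer, its dimensions, and the choice of quantization are all assumptions for illustration, not the authors' actual pipeline.

```python
import io
import torch
import torch.nn as nn

# Toy transformer standing in for an "overgrown" model (hypothetical,
# not the architecture studied in the cited work).
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=6,
)
model.eval()

# Post-training dynamic quantization: weights of nn.Linear layers are
# stored as int8 and dequantized on the fly at inference time, cutting
# memory for those layers roughly 4x with no retraining required.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialize the model's weights to a buffer and report MiB used."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 2**20

print(f"fp32 model: {size_mb(model):.1f} MiB")
print(f"int8 model: {size_mb(quantized):.1f} MiB")
```

Quantization is only one option; a full compression pipeline of the kind described would typically combine it with techniques such as pruning or knowledge distillation, trading a small amount of accuracy for large savings in memory, latency, and energy.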