Music tagging performance can be significantly improved by combining multiple audio features, but existing deep learning methods often sacrifice interpretability. A novel approach uses Genetic Programming (GP) to construct composite features: mathematical combinations of base music features that capture complex interactions while remaining transparent. The GP pipeline evolves these composites automatically, exposing the relationships between audio characteristics rather than burying them in learned weights. Because each composite is an explicit formula, the method supports a deeper understanding of the tagging process and more informed decision-making. GP can also surface synergistic interactions between features, which can improve model performance. This development has significant implications for applications that rely on music tagging, such as music recommendation systems, where practitioners can expect gains in both model transparency and performance.
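The idea of evolving interpretable composite features can be sketched with a minimal GP loop. The following is an illustrative assumption, not the paper's actual pipeline: base feature names (`tempo`, `spectral_centroid`, `zero_crossing_rate`), the operator set, and the selection scheme (truncation selection with random regrowth instead of full crossover/mutation) are all simplified choices for demonstration. The key point it shows is that each evolved individual is an explicit arithmetic formula over base features, so the resulting composite stays human-readable.

```python
import random

# Hypothetical base audio feature names (illustrative assumptions).
BASE_FEATURES = ["tempo", "spectral_centroid", "zero_crossing_rate"]
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_tree(depth=2):
    """Grow a random expression tree over the base features."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(BASE_FEATURES)          # leaf: a base feature
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, sample):
    """Evaluate a tree on one feature dict, yielding a scalar composite feature."""
    if isinstance(tree, str):
        return sample[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, sample), evaluate(right, sample))

def to_formula(tree):
    """Render the tree as a readable formula -- the interpretability payoff."""
    if isinstance(tree, str):
        return tree
    op, left, right = tree
    sym = {"add": "+", "sub": "-", "mul": "*"}[op]
    return f"({to_formula(left)} {sym} {to_formula(right)})"

def fitness(tree, samples, labels):
    """Score a composite by how well its sign separates a binary tag."""
    correct = sum(
        (1 if evaluate(tree, s) > 0 else 0) == y
        for s, y in zip(samples, labels)
    )
    return correct / len(samples)

def evolve(samples, labels, pop_size=30, generations=20):
    """Truncation selection plus random regrowth (a simplified stand-in
    for GP crossover and mutation)."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, samples, labels), reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [random_tree() for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda t: fitness(t, samples, labels))
```

On toy data whose tag depends on the difference of two features, `evolve` tends to recover formulas such as `(tempo - spectral_centroid)`, and `to_formula` makes the winning composite directly inspectable, unlike a learned neural embedding.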