Decision tree ensembles are widely used in safety-critical domains, yet their complexity makes them difficult to verify. A key concern is sensitivity: small changes in input features can flip the classification. Researchers have proposed a symbolic and compositional approach to quantifying sensitivity in these models [1]. The method analyzes how small perturbations of specific features affect the overall classification outcome; by decomposing the ensemble into its constituent decision trees and examining the interactions between them, it yields a more fine-grained understanding of the model's behavior. Quantifying sensitivity is crucial in high-stakes applications such as healthcare and finance, where misclassifications can have severe consequences. For practitioners, the approach can help expose vulnerabilities in decision tree ensembles and guide the design of more robust, reliable models.
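To make the idea concrete, here is a minimal sketch of one way such a sensitivity check can work on a toy ensemble. This is an illustrative assumption, not the authors' actual algorithm: the ensemble is reduced to decision stumps, and the check exploits the fact that a tree's output can only change when a perturbation crosses one of its split thresholds, so it suffices to test finitely many candidate values. All names (`STUMPS`, `predict`, `sensitive`) are hypothetical.

```python
from collections import Counter

# Toy ensemble of decision stumps: (feature, threshold, class_if_below, class_if_above).
# Illustrative only -- real ensembles have deeper trees, but the principle is the same.
STUMPS = [
    (0, 0.5, "A", "B"),
    (0, 0.8, "A", "B"),
    (1, 0.3, "B", "A"),
]

def predict(x):
    """Majority vote over the stumps."""
    votes = Counter(lo if x[f] <= t else hi for f, t, lo, hi in STUMPS)
    return votes.most_common(1)[0][0]

def sensitive(x, feature, eps, delta=1e-9):
    """Return True if perturbing `feature` by at most `eps` can flip the vote.

    A stump's output only changes when the perturbed value crosses its
    threshold, so it is enough to test the interval endpoints plus points
    just on either side of each threshold inside [x[feature]-eps, x[feature]+eps].
    """
    base = predict(x)
    lo, hi = x[feature] - eps, x[feature] + eps
    candidates = [lo, hi]
    for f, t, _, _ in STUMPS:
        if f == feature and lo <= t <= hi:
            candidates += [t, t + delta]
    for v in candidates:
        y = list(x)
        y[feature] = v
        if predict(y) != base:
            return True
    return False

x = [0.45, 0.2]
print(predict(x), sensitive(x, 0, 0.1), sensitive(x, 0, 0.01))  # → A True False
```

The example shows the practical payoff described above: the input is classified "A", and feature 0 is sensitive under a perturbation budget of 0.1 (crossing the 0.5 threshold flips the majority vote to "B") but robust under a budget of 0.01. A symbolic, compositional analysis generalizes this idea to whole trees by reasoning over threshold-delimited regions of each feature instead of enumerating samples.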