Large language models can generate highly empathic responses in single-turn interactions, but their ability to sustain diversity across multi-turn dialogue is limited. Research has shown that these models often fall back on formulaic patterns, reusing the same lexical and syntactic structures across tasks, which makes extended interactions repetitive and unengaging. A recent study examined the discourse diversity of large language models in empathic dialogue and highlighted the need for more nuanced, varied responses in multi-turn settings [1]. Its findings suggest that although these models can produce empathic responses, their formulaic tendencies hinder meaningful, dynamic conversation. For practitioners, this points to a concrete goal: language models that adapt to complex conversational contexts are essential for effective and empathic human-computer interaction.
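One common way to quantify the kind of lexical repetition described above is a distinct-n metric: the proportion of unique n-grams across a model's responses. The sketch below is illustrative only, not the cited study's methodology, and the example replies are invented.

```python
from collections import Counter

def distinct_n(responses, n):
    """Proportion of unique n-grams across a set of responses.

    Values near 1.0 indicate varied wording; values near 0.0
    indicate heavy reuse of the same phrases.
    """
    ngrams = []
    for text in responses:
        tokens = text.lower().split()
        ngrams.extend(tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)

# Hypothetical multi-turn replies: formulaic vs. varied phrasing
formulaic = [
    "i am so sorry to hear that",
    "i am so sorry to hear this",
    "i am so sorry you feel that way",
]
varied = [
    "that sounds exhausting, how are you holding up",
    "losing a pet is genuinely hard, take your time",
    "i can see why the news shook you",
]

print(distinct_n(formulaic, 2))  # lower score: repeated bigrams
print(distinct_n(varied, 2))     # higher score: little n-gram overlap
```

A lower distinct-2 score for the formulaic set reflects the repeated "i am so sorry" frame; tracking such scores across dialogue turns is one simple way to detect the collapse in diversity the study reports.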