Discourse Diversity in Multi-Turn Empathic Dialogue

Why This Matters

Large language models can generate responses rated as highly empathic in single-turn interactions, but their ability to maintain discourse diversity across multi-turn dialogue is limited. These models often fall back on formulaic patterns, reusing the same lexical and syntactic structures across tasks, which makes extended conversations repetitive and unengaging. A recent study examined the discourse diversity of large language models in empathic dialogue and highlighted the need for more varied, context-sensitive responses in multi-turn settings [1]. The findings suggest that although these models can produce empathic responses, their formulaic style hinders meaningful, dynamic conversation. For practitioners, this means that building models able to adapt to complex conversational contexts is crucial for effective and empathic human-computer interaction.
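The lexical diversity discussed above is commonly quantified with surface metrics such as distinct-n, the ratio of unique n-grams to total n-grams across a set of responses. The sketch below is illustrative only and is not the metric used in the study; the example dialogue turns are invented for demonstration.

```python
def distinct_n(turns, n=2):
    """Distinct-n: unique n-grams divided by total n-grams across turns.

    A common proxy for lexical diversity. Values near 0 indicate
    highly repetitive (formulaic) output; values near 1 indicate
    that few n-grams are reused.
    """
    ngrams = []
    for turn in turns:
        tokens = turn.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)

# Hypothetical multi-turn responses: a formulaic model reuses one frame.
formulaic = [
    "i am so sorry to hear that you are feeling this way",
    "i am so sorry to hear that you are going through this",
    "i am so sorry to hear that things are hard for you",
]
varied = [
    "that sounds exhausting, especially on top of everything else",
    "losing a routine you relied on can really throw you off",
    "it makes sense you would feel hurt after that conversation",
]
print(distinct_n(formulaic))  # lower: the shared opening repeats many bigrams
print(distinct_n(varied))     # higher: almost no bigram is reused
```

Single-turn evaluation would rate each formulaic response above as acceptable in isolation; only a corpus-level metric like this exposes the repetition across turns.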
Abstract: Large language models (LLMs) produce responses rated as highly empathic in single-turn settings (Ayers et al., 2023; Lee et al., 2024), yet they are also known to be formulaic…
References

[1] Ayers et al. (2023). [Article title]. arXiv. https://arxiv.org/abs/2604.11742v1