Research has shown that the trustworthiness of an information source significantly influences the use of evidential morphology in Turkish. A recent study examined how large language models (LLMs) and humans respond to variations in source trustworthiness, specifically in the past-domain contrast between the direct-evidence suffix -DI and the indirect-evidence (reportative/inferential) suffix -mIş. The experiment manipulated the perceived reliability of the information source, categorizing it as either High-Trust or Low-Trust, and assessed which suffix humans and models subsequently produced. The findings indicate that both humans and LLMs are sensitive to source trustworthiness, although the degree of sensitivity differs between the two [1]. This result has implications for developing language models that capture the nuances of human language and reasoning: the ability of LLMs to track source reliability is crucial for applications where trust and credibility are paramount, so understanding these dynamics is essential for practitioners building AI systems that interact with humans.
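
As a rough illustration of this kind of manipulation, the sketch below builds prompts for a High-Trust and a Low-Trust source and checks which past suffix appears in an LLM's Turkish reply. This is a minimal sketch, not the study's actual materials: the generate callable, the condition wordings, the stimulus sentence, and the suffix-detection heuristics are all assumptions introduced here for illustration.

```python
import re
from typing import Callable

# Hypothetical source descriptions for the two trust conditions
# (illustrative stand-ins, not the study's stimuli).
CONDITIONS = {
    "High-Trust": "Ahmet, a meteorologist you have worked with for years, tells you:",
    "Low-Trust": "A stranger you just met on the bus tells you:",
}

# Crude surface heuristics for the two past suffixes under vowel harmony:
# -DI surfaces as -dı/-di/-du/-dü (or -tı/-ti/-tu/-tü after voiceless consonants),
# -mIş surfaces as -mış/-miş/-muş/-müş.
DI_PATTERN = re.compile(r"\w+(dı|di|du|dü|tı|ti|tu|tü)\b")
MIS_PATTERN = re.compile(r"\w+(mış|miş|muş|müş)\b")

def classify_evidential(sentence: str) -> str:
    """Label a Turkish sentence by which past suffix it appears to use."""
    if MIS_PATTERN.search(sentence):
        return "-mIş"
    if DI_PATTERN.search(sentence):
        return "-DI"
    return "other"

def run_condition(generate: Callable[[str], str], condition: str) -> str:
    """Build a prompt for one trust condition and classify the model's reply.

    `generate` is any callable that maps a prompt string to the model's text output.
    """
    prompt = (
        f"{CONDITIONS[condition]} 'Dün yağmur yağdı.' "
        "Report this information to a friend in one Turkish sentence."
    )
    return classify_evidential(generate(prompt))

# Example usage with any text-generation callable:
# for condition in CONDITIONS:
#     print(condition, run_condition(my_llm, condition))
```

Note that the regex check is only a word-final heuristic; a real analysis would use morphological parsing, since many Turkish words end in these letter sequences without carrying the past suffix.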