Research indicates that humans expect rational and cooperative behavior from Large Language Models (LLMs) in strategic games. In a controlled laboratory experiment, participants played a multi-player p-beauty contest, a game in which each player submits a number and the winner is whoever comes closest to a fraction p of the group average, against both human and LLM opponents. The within-subject design enabled individual-level comparisons of how the same participant behaved against each opponent type. The findings suggest that humans trust LLMs to act rationally and cooperatively, much as they expect human opponents to do. This trust carries significant implications for integrating LLMs into social and economic systems, since it may shape human decision-making and behavior in those settings [1]. Understanding this dynamic matters for practitioners, as it bears on how LLMs are developed and deployed across applications.
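To make the game concrete, the following is a minimal sketch of a single p-beauty round. The parameter values (p = 2/3, guesses in [0, 100], a midpoint anchor) and the level-k player heuristic are standard in the game-theory literature but are illustrative assumptions here, not details reported from the study.

```python
import random

def play_round(guesses, p=2/3):
    """One p-beauty round: the winner is the guess closest to p * mean.

    Assumes the conventional setup (guesses in [0, 100], p = 2/3);
    these parameters are illustrative, not taken from the study.
    """
    target = p * (sum(guesses) / len(guesses))
    winner = min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))
    return target, winner

def level_k_guess(k, p=2/3, anchor=50.0):
    """Level-k heuristic: a level-0 player guesses the anchor (midpoint),
    and a level-k player best-responds to level-(k-1), guessing p**k * anchor."""
    return (p ** k) * anchor

# Example: three players reasoning at increasing depths, plus one random player.
guesses = [level_k_guess(k) for k in (0, 1, 2)] + [random.uniform(0, 100)]
target, winner = play_round(guesses)
print(f"guesses={[round(g, 1) for g in guesses]}, "
      f"target={target:.1f}, winner=player {winner}")
```

Because p < 1, deeper iterated reasoning pushes guesses toward the Nash equilibrium of zero, which is why observed guesses in this game serve as a standard measure of how rational a player believes their opponents to be.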