Large language model (LLM) agents, acting as human delegates in multi-agent environments, are susceptible to social dynamics that compromise their objective decision-making capabilities. Research has identified four key social phenomena that undermine the reliability of these representative agents, including social conformity, which can lead to biased decision-making. The social context of an LLM's network can significantly impact its ability to integrate diverse peer perspectives, ultimately affecting the final decision. This vulnerability is particularly concerning in environments where LLMs are used to make critical decisions, such as in DeFi applications. The security implications of LLM developments are far-reaching, and understanding these social dynamics is crucial to mitigating potential risks1. As LLMs continue to evolve and play a larger role in decision-making processes, addressing these vulnerabilities is essential to ensuring the integrity and reliability of these systems, and thus, the security of the environments in which they operate.