Artificial intelligence aggregation significantly affects social learning by altering how information is processed and disseminated. When AI-generated outputs are used as training data for future predictions, they can create a self-reinforcing loop that drifts away from the optimal outcome. Researchers have extended the DeGroot model with an AI aggregator that synthesizes signals from population beliefs and feeds the result back to the agents, influencing their decision-making. This can produce a learning gap, defined as the disparity between agents' long-run beliefs and the efficient benchmark; introducing AI aggregation can widen this gap, resulting in suboptimal outcomes [1]. The phenomenon has significant implications for fields where AI is increasingly used to inform decision-making, such as cybersecurity and geopolitics. For practitioners, the takeaway is that understanding how AI aggregation affects social learning can help them design strategies that mitigate the resulting biases and errors.
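The feedback loop described above can be illustrated with a minimal simulation. The sketch below is an assumption-laden toy, not the researchers' actual model: it assumes agents update via standard DeGroot averaging over a row-stochastic trust matrix `W`, that the AI aggregator simply broadcasts the current population mean, and that a hypothetical mixing weight `lam` controls how much agents trust the AI signal. The "efficient benchmark" is taken to be the equal-weight average of initial beliefs.

```python
import numpy as np

def degroot_with_ai(W, beliefs, lam=0.3, steps=100):
    """Iterate DeGroot updating where each agent blends its peers' average
    with a signal from an AI aggregator (here: the population mean).
    lam is a hypothetical trust weight on the AI signal."""
    b = beliefs.copy()
    for _ in range(steps):
        ai_signal = b.mean()                        # AI aggregates current beliefs
        b = (1 - lam) * (W @ b) + lam * ai_signal   # agents mix peers and AI feedback
    return b

# Four agents with a random row-stochastic trust matrix (who listens to whom).
rng = np.random.default_rng(0)
W = rng.random((4, 4))
W /= W.sum(axis=1, keepdims=True)

initial = np.array([0.2, 0.4, 0.6, 0.8])
benchmark = initial.mean()      # efficient benchmark: equal-weight pooling of signals

final = degroot_with_ai(W, initial)
learning_gap = abs(final.mean() - benchmark)
print("long-run beliefs:", final.round(4))
print(f"learning gap vs. efficient benchmark: {learning_gap:.4f}")
```

Because the combined update matrix is strictly positive, beliefs converge to a consensus; the gap between that consensus and the equal-weight benchmark is the toy analogue of the learning gap, and it generally does not vanish when the trust matrix over-weights some agents.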