Researchers have developed an AI-powered guide to make virtual reality more accessible to blind and low-vision users. Built on a large language model, the tool helps users navigate virtual environments and answers their questions. To evaluate it, the researchers ran a study with 16 blind and low-vision participants; the results shed light on how such technology can make social virtual reality more inclusive, enabling blind and low-vision users to participate more fully in virtual interactions. For practitioners and informed readers, the work highlights the potential of AI-driven tools to address longstanding accessibility challenges in VR.