Look2React: Making VR NPCs Come Alive with Dynamic Vision-Guided Reactions (Conditionally Accepted to IEEE VR 2026)
To appear in The 33rd IEEE Conference on Virtual Reality and 3D User Interfaces (VR ’26), 2026
Recommended citation: Ritik Vatsal, Xincheng Huang, and Robert Xiao. 2026. Look2React: Making VR NPCs Come Alive with Dynamic Vision-Guided Reactions. In 33rd IEEE Conference on Virtual Reality and 3D User Interfaces (VR ’26). To appear. https://xincheng.me/publications/look2react
Abstract: A central promise of virtual reality (VR) games is the increased control players have over their character through pose and body language. However, many non-player character (NPC) systems fail to respond convincingly to these poses, user intent, and situational context, limiting immersion. We present Look2React, an interaction system that captures what NPCs see and uses a vision-based reasoning model to select pose and text responses, endowing NPCs with the ability to react dynamically and appropriately to player interactions. A gaze- and proximity-based detection system inspired by stealth games triggers these reactions intuitively and only when intended, while also reducing resource costs. We invited 20 participants to play two versions of an RPG game: one with NPCs modeled on contemporary games and the other with Look2React NPCs. Our results demonstrate that Look2React increases engagement, leading to more frequent and repeated interactions with NPCs. Participants reported more satisfying play sessions, significantly increased feelings of social presence, and felt that the dynamic reactions gave the NPCs more depth and personality, ultimately making them feel more human.
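The gaze- and proximity-based trigger described in the abstract can be pictured as a simple view-cone and distance check that gates the expensive vision-reasoning call. The sketch below is a minimal illustration of that idea, assuming the paper's stealth-game-style detection reduces to such a gate; the function name, thresholds, and data types are illustrative assumptions, not the system's actual implementation.

```python
import math
from dataclasses import dataclass


@dataclass
class Transform:
    position: tuple[float, float, float]  # world-space position
    forward: tuple[float, float, float]   # unit-length gaze direction


def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])


def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]


def should_trigger_reaction(npc: Transform, player: Transform,
                            max_distance: float = 3.0,
                            fov_degrees: float = 60.0) -> bool:
    """Hypothetical gaze-and-proximity gate: fire only when the player is
    close to the NPC and roughly looking at it, so the costly vision-based
    reasoning model is invoked only when an interaction is likely intended."""
    to_npc = _sub(npc.position, player.position)
    distance = math.sqrt(_dot(to_npc, to_npc))
    if distance == 0.0 or distance > max_distance:
        return False  # too far away (or degenerate overlap): skip the model call
    # Cosine of the angle between the player's gaze and the direction to the NPC.
    cos_angle = _dot(player.forward, to_npc) / distance
    return cos_angle >= math.cos(math.radians(fov_degrees / 2.0))
```

Under these assumptions, an NPC would poll this check each frame and only capture its view and query the reasoning model when it returns true, which matches the abstract's claim of triggering reactions only when intended while keeping resource costs down.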
Preprint: DOWNLOAD PDF
