
SentimentVoice tells immigrant stories through Virtual Reality

SentimentVoice, a live virtual reality (VR) performance art project, uses emotion-tracking artificial intelligence (AI) technology to share the stories of immigrant communities.

A virtual reality image of a computer room. Created by Matthew Labella.

The emotion-tracking process is displayed during the performance to demonstrate how speech and facial expressions can be detected without a person's awareness, highlighting cybersecurity concerns.

Emotion AI technology, typically used for surveillance or commercial purposes, is transformed here into an empathetic mediator for active listening. Human voice and movement become the core material of the performance, detected and analyzed by emotion AI through face tracking, ChatGPT, voice analysis, and speech-to-text conversion.
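The article names the pipeline's stages but not its implementation. As a rough illustration only, the Python sketch below mimics the speech-to-text and emotion-analysis steps: transcribe_speech is an unimplemented placeholder for whatever speech-to-text service the project actually uses, and the keyword lexicon is a toy stand-in for a real emotion model.

```python
import re

# Hypothetical keyword lexicon standing in for a trained emotion model.
EMOTION_KEYWORDS = {
    "joy": {"happy", "hope", "welcome", "home"},
    "sadness": {"lost", "alone", "afraid", "miss"},
    "anger": {"unfair", "denied", "refused"},
}

def transcribe_speech(audio_chunk: bytes) -> str:
    """Placeholder for a real speech-to-text call; the article does not
    say which service the project uses."""
    raise NotImplementedError("plug in a real speech-to-text service here")

def score_emotions(transcript: str) -> dict:
    """Count emotion-keyword hits per category in the transcript."""
    words = set(re.findall(r"[a-z']+", transcript.lower()))
    return {emotion: len(words & keywords)
            for emotion, keywords in EMOTION_KEYWORDS.items()}

def dominant_emotion(transcript: str) -> str:
    """Return the highest-scoring emotion, or 'neutral' if nothing matched."""
    scores = score_emotions(transcript)
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

if __name__ == "__main__":
    print(dominant_emotion("I felt lost and alone in a new country"))  # sadness
```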

SentimentVoice began with an oral history process, gathering stories from immigrants in the Richmond community. The stories provided the basis for a script for actors to follow. During a live performance, one actor wears a VR headset while the other doesn't, establishing an intricate connection between virtual and actual space.

As the actors navigate the five VR locations and tell the associated stories, the emotion AI responds to their speech with visual acknowledgement (particles, textures, lights) and sound within the VR environment.
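The mapping from a detected emotion to in-world feedback is likewise unspecified in the article. The sketch below shows one plausible shape for it, assuming a simple lookup from an emotion label to invented particle, light, and sound parameters that a VR engine could consume.

```python
# Hypothetical mapping from a detected emotion to VR feedback parameters.
# Parameter names and values are illustrative; a real implementation would
# feed them to the VR engine as particle, lighting, and audio inputs.

from dataclasses import dataclass

@dataclass
class SceneResponse:
    particle_count: int   # density of the particle effect
    light_color: tuple    # RGB components, each 0.0-1.0
    sound_cue: str        # name of an audio asset to trigger

RESPONSES = {
    "joy":     SceneResponse(800, (1.0, 0.9, 0.4), "warm_chimes"),
    "sadness": SceneResponse(200, (0.2, 0.3, 0.7), "low_drone"),
    "anger":   SceneResponse(600, (0.9, 0.2, 0.1), "sharp_pulse"),
    "neutral": SceneResponse(100, (0.8, 0.8, 0.8), "ambient_room"),
}

def respond_to(emotion: str) -> SceneResponse:
    """Look up the scene response for an emotion, falling back to neutral."""
    return RESPONSES.get(emotion, RESPONSES["neutral"])

print(respond_to("sadness"))
```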

Project Investigators

Principal investigator (PI): Semi Ryu, professor, Virginia Commonwealth University (VCU) School of the Arts, Department of Kinetic Imaging

Co-PI: Alberto Cano, associate professor, VCU Department of Computer Science