
SentimentVoice: Integrating Emotion AI and VR in Performing Arts

Researchers will use SentimentVoice, a live virtual reality (VR)-based performance art project, to explore how threat actors can surveil the emotion data carried in personal speech across chat, text, and social media. The work will demonstrate the vulnerability and insecurity of emotion data and promote public awareness of those risks.

Project funded by the Commonwealth Cyber Initiative (CCI) Hub


Project Investigators

Principal investigator (PI): Semi Ryu, Professor, Virginia Commonwealth University (VCU), School of the Arts, Department of Kinetic Imaging 

Co-PI: Alberto Cano, Associate Professor, VCU Department of Computer Science

Rationale and Background

Emotion-tracking Artificial Intelligence (AI) technologies, such as lie detectors and facial recognition, have been used to create controlled and monitored environments. However, emotion data can also be used to create empathy, foster deep connections, and encourage mutual understanding.

The project will bring awareness to emotion-tracking systems applied to daily conversations on social media and digital community platforms, and will promote cybersecurity research that protects privacy not only for factual information but also for the emotional characteristics and history of individuals.

Methodology

Researchers will use SentimentVoice to spotlight the vulnerabilities and risks posed by adversarial attacks from the environment and to advance the understanding of safe human-VR interaction.

SentimentVoice uses emotion-tracking AI technology for live, continuous speech recognition. It explores the human voice and speech as performative art materials that produce emotion, which is detected and analyzed by AI and can be used for either empathy or surveillance.
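
As a rough illustration of this pipeline, the sketch below runs sentiment analysis over one transcribed speech segment using a generic pretrained model from the Hugging Face transformers library. This is an assumption for illustration only; the project's own deep-learning models and speech-recognition front end are not specified here.

    # Minimal sketch: sentiment analysis over one transcribed speech segment.
    # Assumes a generic pretrained model from the Hugging Face `transformers`
    # library; the project's own emotion-AI models are not described here.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # generic English classifier

    def analyze_segment(transcript: str) -> dict:
        """Return a label (e.g., POSITIVE/NEGATIVE) and a confidence score
        for one transcribed speech segment."""
        return classifier(transcript)[0]

    if __name__ == "__main__":
        segment = "I still remember the day we had to leave everything behind."
        result = analyze_segment(segment)
        print(result["label"], round(result["score"], 3))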

During a live event, a performer wearing a VR headset will tell stories of underrepresented populations with a dramatic emotional flow. The performer’s speech will be monitored, analyzed for emotion recognition, and used to activate audio-visual elements in the VR environment.
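
One plausible way to wire detected emotion into a VR scene is to forward each label over OSC, a protocol commonly used for live performance control. The snippet below uses the python-osc package; the port number and the /sentimentvoice/emotion address are hypothetical, since this description does not name the project's actual engine interface.

    # Minimal sketch: forward a detected emotion label to a VR engine so it
    # can activate matching audio-visual elements. Uses OSC via `python-osc`;
    # the port and the "/sentimentvoice/emotion" address are hypothetical.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # assumed local engine port

    def trigger_visuals(label: str, score: float) -> None:
        """Send the emotion label and confidence for the current speech
        segment; the VR scene decides which elements to activate."""
        client.send_message("/sentimentvoice/emotion", [label, float(score)])

    trigger_visuals("NEGATIVE", 0.91)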

Researchers aim to:

  • Visualize the emotion-tracking process clearly for the viewer.
  • Develop emotion AI deep-learning models and use them to trigger audio-visual performance in VR.
  • Conduct technical exploration of emotion AI through emotion recognition from voice texture and sentiment analysis from textual representations of the speech (see the first sketch after this list).
  • Analyze the robustness and resilience of emotion AI models to adversarial attacks (see the second sketch after this list).
  • Process big data to find related images, links, and ads based on detected emotions.
  • Explore a new form of VR speech-based performance art.
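
For the voice-texture side of the third aim, one common starting point is a compact acoustic feature vector such as mean MFCCs. The sketch below uses librosa, an assumed tooling choice, and a hypothetical audio file name; the project may use different features or models.

    # Minimal sketch of "voice texture" features for emotion recognition:
    # mean MFCCs extracted with `librosa`. The library choice and the file
    # name are assumptions; the project may use different features or models.
    import librosa
    import numpy as np

    def voice_texture_features(path: str) -> np.ndarray:
        """Return a fixed-length acoustic feature vector (mean MFCCs) that
        an emotion classifier could consume."""
        y, sr = librosa.load(path, sr=16000)                # mono, 16 kHz
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 coefficients
        return mfcc.mean(axis=1)                            # shape: (13,)

    features = voice_texture_features("performer_segment.wav")  # hypothetical file
    print(features.shape)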
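
For the robustness aim, a toy version of an adversarial check is to apply a tiny input perturbation and see whether the predicted label flips; real analyses use principled attacks such as synonym substitution or gradient-based methods. The sketch below assumes the same generic sentiment classifier as above.

    # Toy adversarial-robustness check: swap two adjacent characters in a
    # transcript and test whether the sentiment label flips. A real analysis
    # would use principled attacks; this only sketches the idea.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # generic model (assumption)

    def char_swap(text: str, i: int) -> str:
        """Swap characters i and i+1 to mimic a minimal perturbation."""
        return text[:i] + text[i + 1] + text[i] + text[i + 2:]

    original = "I am terrified of being watched."
    perturbed = char_swap(original, 5)  # "I am etrrified of being watched."

    clean = classifier(original)[0]
    noisy = classifier(perturbed)[0]
    print(clean["label"], "->", noisy["label"],
          "(flipped)" if clean["label"] != noisy["label"] else "(stable)")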

Projected Outcomes

Researchers will bring awareness to how emotion is tracked and monitored in digital conversations such as chat and speech. They will:

  • Develop an emotion-tracking AI system based on the human voice and speech content from live, continuous speech.
  • Create and test deep-learning models.
  • Present a live performance at the CCI exhibit in a Virginia venue.
  • Record video documentation of the performance.
  • Set up VR interactive content for people to participate with their own speech.
  • Disseminate results, including presentation of the project in public exhibitions.

The process will also be presented at engineering, new media, electronic arts, and VR conferences, such as ACM and IEEE venues, ISEA (International Symposium on Electronic Art), SIGGRAPH XR, CAA (College Art Association), a2ru (Alliance for the Arts in Research Universities), Ars Electronica, the International Conference on Machine Learning (ICML), and the International Conference on Computational Science (ICCS).

SentimentVoice Credits

  • Creative Director: Semi Ryu
  • Scientific Director: Alberto Cano
  • AI Engineer/VR Developers: Miles Popiela, Henry Bryant
  • 3D Modeler and Texture Artist: Matthew Labella
  • Video Editor: Hanna Chou
  • Sound Artist: Chrystine Rayburn 
  • Actors: Ryan Flores, Katherine Nguyen
  • Script Writers: Hanna Chou, Katherine Nguyen, Ryan Flores
  • Videographers: Hanna Chou, Kiara Brown, Uday Illa, Ryan Alvarado
  • Interviewers: Hanna Chou, Richmond Animation Archive, Menna Hassanain 

Special thanks to Noren Gelberg-Hagmaier, Ariana Thomas, Josiah Wilson, and VCU AAPIA.

Funded by the Commonwealth Cyber Initiative