Principled and Automated Approach for Investigating AR/VR Attacks

Research Paper Showcase 2025

Abstract

As Augmented and Virtual Reality (AR/VR) adoption grows across sectors, auditing systems are needed to enable provenance analysis of AR/VR attacks. However, traditional auditing systems often generate inaccurate and incomplete provenance graphs, or fail to work at all due to the operational restrictions of AR/VR devices.

This paper presents REALITYCHECK, a provenance-based auditing system designed to support accurate root cause analysis and impact assessment of complex AR/VR attacks. Our system first enhances the W3C PROV data model with an additional ontology that captures AR/VR-specific entities and causal relationships. Then, we employ a novel adaptation of natural language processing and feature-based log correlation techniques to transparently extract entities and relationships from dispersed, unstructured AR/VR logs and assemble them into provenance graphs.
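
To give a flavor of what a PROV-style provenance graph extended with AR/VR concepts might look like, the minimal sketch below builds one with networkx. The node names, the AR/VR-specific entity kinds, and the "occludes" relation are hypothetical illustrations, not the paper's actual ontology or extraction pipeline.

```python
# Minimal sketch of a PROV-style provenance graph augmented with
# hypothetical AR/VR-specific node types (not the paper's actual ontology).
import networkx as nx

G = nx.DiGraph()

# Core PROV concepts: entities, activities, agents.
G.add_node("app:ImmersiveShooter", kind="agent")          # AR/VR application (agent)
G.add_node("activity:RenderFrame", kind="activity")       # rendering activity
G.add_node("entity:HandPose.json", kind="entity")         # tracked input data (entity)

# Hypothetical AR/VR-specific additions to the ontology.
G.add_node("arvr:OverlaySurface#3", kind="arvr_entity")   # virtual overlay placed in the scene
G.add_node("arvr:GuardianBoundary", kind="arvr_entity")   # device play-area boundary

# PROV-style causal relationships (edge label = relation type).
G.add_edge("activity:RenderFrame", "entity:HandPose.json", relation="used")
G.add_edge("arvr:OverlaySurface#3", "activity:RenderFrame", relation="wasGeneratedBy")
G.add_edge("activity:RenderFrame", "app:ImmersiveShooter", relation="wasAssociatedWith")
G.add_edge("arvr:OverlaySurface#3", "arvr:GuardianBoundary", relation="occludes")  # AR/VR-specific relation

# Walk the graph to answer a simple provenance question:
# which agent is ultimately responsible for the suspicious overlay?
for src, dst in nx.edge_dfs(G, "arvr:OverlaySurface#3"):
    relation = G.edges[src, dst]["relation"]
    print(f"{src} --{relation}--> {dst}")
```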

Finally, we introduce an AR/VR-aware execution partitioning technique to filter out forensically irrelevant data and false causal relationships from these provenance graphs, improving analysis accuracy and investigation speed. We built a REALITYCHECK prototype for Meta Quest 2 and evaluated it against 25 real-world AR/VR attacks. The results show that REALITYCHECK generates accurate provenance graphs for all AR/VR attacks and incurs low runtime overhead across benchmarked applications.
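
The sketch below illustrates the general idea of pruning a provenance graph around a point of interest. It is a simplified backward slice over a toy graph with made-up node names, assuming networkx; the paper's AR/VR-aware execution partitioning is more involved than this.

```python
# Minimal sketch of graph reduction in the spirit of execution partitioning:
# given a point of interest (e.g., a malicious overlay), keep only the nodes
# on its causal path and drop forensically irrelevant activity. This is a
# simplified backward slice, not the paper's AR/VR-aware algorithm.
import networkx as nx

G = nx.DiGraph()  # edges point from cause to effect
G.add_edges_from([
    ("proc:SideloadedAPK", "net:AttackerC2"),
    ("net:AttackerC2", "arvr:MaliciousOverlay"),
    ("proc:SideloadedAPK", "arvr:MaliciousOverlay"),
    # Unrelated benign activity that bloats the graph.
    ("proc:MusicPlayer", "file:playlist.db"),
    ("proc:Updater", "net:VendorCDN"),
])

symptom = "arvr:MaliciousOverlay"

# Backward slice: the symptom plus everything that can causally reach it.
relevant = nx.ancestors(G, symptom) | {symptom}
pruned = G.subgraph(relevant).copy()

print(f"Original graph: {G.number_of_nodes()} nodes, {G.number_of_edges()} edges")
print(f"Pruned graph:   {pruned.number_of_nodes()} nodes, {pruned.number_of_edges()} edges")
```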

Notably, our execution partitioning approach drastically reduces the size of the graph without sacrificing essential investigation details. Our system operates non-intrusively, requires no additional installation, and is generalizable across various AR/VR devices.


Authors

  • Muhammad Shoaib, University of Virginia
  • Alex Suh, University of Virginia
  • Wajih Ul Hassan, University of Virginia

Publication

  • Venue: USENIX Security 2025
  • Date: January 23, 2025

Related Papers

CTINEXUS: Automatic Cyber Threat Intelligence Knowledge Graph Construction Using Large Language Models

An Exploratory Mixed-methods Study on General Data Protection Regulation (GDPR) Compliance in Open-Source Software

Security Enhancement in UAV Swarms: A Case Study Using Federated Learning and SHAP Analysis

Scale-MIA: A Scalable Model Inversion Attack against Secure Federated Learning via Latent Space Reconstruction

S2M3: Split-and-Share Multi-Modal Models for Distributed Multi-Task Inference on the Edge

"This is not a scam!": Assessment of an awareness raising program tackling older adults' scam victimization in a multi-method study

Unraveling the Complexities of MTA-STS Deployment and Management in Securing Email