
Assuring Trustworthy and Secure Human-AI Collaboration to Strengthen U.S. Civil Infrastructure

Researchers seek to understand human-artificial intelligence (AI) collaboration in bridge maintenance through the lens of AI assurance, specifically security, explainability, and trustworthiness, and how these properties are interconnected.

Project funded by: CCI Hub


Rationale and Background

The bridge inspection industry is expected to nearly quadruple in value between 2020 and 2029, from $1.6 billion to $6.3 billion, driven in large part by investment in autonomous technologies such as drone-based inspection, wireless sensor networks (WSNs), and monitoring enabled by the Internet of Things (IoT).

The research community has not yet addressed the cybersecurity risks of this transition to digital bridge maintenance, particularly with respect to collaboration between humans and AI decision support.

Corrupting inspection data, algorithms, and records can lead to inappropriate maintenance decisions or shutdowns of vital transportation routes. 

Given that state and federal bridge maintenance is already backlogged, data and algorithm breaches have the potential to completely destabilize critical bridge maintenance operations and paralyze large portions of the transportation network.

Methodology

The team will leverage a virtual reality environment to test different models of human-AI collaboration in the presence of both physical and cybersecurity risks. 

Measures will include quantitative metrics of performance and qualitative analyses of users’ trust and confidence in the technology. Targeted objectives include:

  • Security: Understanding how different levels of collaborative AI strategies support a human’s ability to identify (or miss) data and algorithmic security breaches.
  • Explainability: Understanding how different levels of collaborative AI strategies support a human’s ability to articulate the strengths/weaknesses of an algorithm along with possible deficiencies in training examples.
  • Trustworthiness: Understanding how different levels of collaborative AI strategies lead to increased or diminished trust in the technology.
  • Interdependence: Understanding the correlation between security, explainability, and trustworthiness (see the analysis sketch after this list).
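
As an illustration of how the interdependence objective might be quantified, the minimal sketch below correlates per-participant measures of the three assurance properties. The column names, rating scales, and data values are hypothetical placeholders for the measures the study will collect, not the project's actual instruments or analysis pipeline.

```python
# Illustrative sketch only: correlating hypothetical per-participant measures
# of security, explainability, and trustworthiness from a VR study.
from itertools import combinations

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical results exported from the VR experiment (placeholder values).
results = pd.DataFrame({
    "breach_detection_rate": [0.90, 0.60, 0.75, 0.40, 0.85],  # security
    "explanation_score":     [4.5, 3.0, 4.0, 2.5, 4.2],       # explainability (1-5 rubric)
    "trust_rating":          [6.0, 4.0, 5.5, 3.0, 6.5],       # trustworthiness (1-7 Likert)
})

# Spearman rank correlation for each pair of measures, with its p-value.
for a, b in combinations(results.columns, 2):
    rho, p = spearmanr(results[a], results[b])
    print(f"{a} vs {b}: rho={rho:.2f}, p={p:.3f}")
```

Rank correlation is used here only because it is robust for small samples and ordinal ratings; the team's actual statistical methods may differ.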

Projected Outcomes

  • VR interface application: The team’s VR interface code will be published online in an open-source repository to support additional experiments and research. The code will be converted to an Oculus Quest 2 application, which casual users can easily download and run.
  • Demonstrations: The VR interface will be presented as a demonstration at various venues to promote the research and its open-source materials.
  • Assured AI models: The AI models will include generalizable components, and novel explainability methods will be open-sourced to facilitate research advancement.
  • Publications: Preliminary results will be presented at the 2023 International Conference on Artificial Intelligence Testing, and final results will be summarized in a journal paper submitted to Engineering Applications of Artificial Intelligence or Artificial Intelligence Review.