
Securing the Machine Learning Components of Autonomous Systems: Risk Assessment and Mitigation

Researchers aim to enhance autonomous vehicle (AV) resilience by understanding the end-to-end effect of faults on the operation of AVs, and then proposing practical mitigation techniques to harden AV operation against those faults.

Funded by the CCI Hub


Rationale and Background

Autonomous systems, such as AVs, robots, and drones, are becoming indispensable parts of supply chains for shipping and delivery.

With the growing complexity of software and the use of deep neural networks (DNNs) for perception and control, many factors can threaten the safe operation of complex software-intensive cyber-physical systems (CPS), such as:

  • Accidental and malicious perturbations on sensor data.
  • Software bugs.
  • Transient faults in hardware. 

Recent work has also shown that machine learning (ML)-based perception systems in AVs are vulnerable to adversarial attacks.

ML systems on AVs rely on specialized hardware such as GPUs, which pack a large number of computational units to provide orders of magnitude higher throughput.

GPUs are known to be susceptible to transient hardware faults, as observed in high-performance computing environments. In the DNN domain, such faults can silently corrupt computations and threaten software functionality.

Methodology

Researchers will address challenges by:

  • Evaluating the resilience of AVs’ ML components. Researchers will develop a software fault-injection methodology to evaluate the resilience of AV software.
  • Developing mitigation techniques to boost the resilience of DNNs used in AVs. Researchers will develop techniques to harden ML components against hardware faults.

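To illustrate the kind of primitive a software fault-injection methodology typically builds on (a minimal sketch for intuition, not the project's actual tooling), the snippet below flips a single bit in the IEEE-754 float32 encoding of a model weight, emulating a transient hardware fault. The function name is hypothetical.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = least-significant mantissa bit, 31 = sign bit)
    in the IEEE-754 float32 encoding of `value`, emulating a transient
    hardware fault in a stored DNN weight."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))  # float -> raw 32 bits
    bits ^= 1 << bit                                          # flip the target bit
    (faulty,) = struct.unpack("<f", struct.pack("<I", bits))  # raw bits -> float
    return faulty

# A flip in a low-order mantissa bit barely perturbs the weight,
# while a flip in a high exponent bit changes it by many orders of
# magnitude -- the kind of silent corruption that can cascade through
# a DNN's layers and flip a perception result.
small_error = flip_bit(0.5, 0)
large_error = flip_bit(0.5, 30)
```

Sweeping such flips over a model's parameters and comparing the end-to-end AV behavior before and after each injection is one common way to quantify which components are most fault-sensitive.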
Projected Outcomes

Researchers aim to improve the resilience of the DNN components of AVs and to guide the design of mechanisms for timely detection and mitigation of hazards, preventing catastrophic accidents.