Trust AI: Enhancing Human Confidence and Trust in Deep Learning Models

The researchers' goal is to strengthen the next generation of deep learning defense methods and to bolster confidence in the trustworthiness and reliability of deep learning models from both human and algorithmic perspectives.

Project funded by: CCI Hub


Project Investigators

Lead Principal Investigator (PI): Rui Ning, assistant professor, Old Dominion University Department of Computer Science 

PI: Xinwei Deng, professor, Virginia Tech Department of Statistics 

Co-PI: Xiao Yang, assistant professor, Old Dominion University Department of Psychology

Co-PI: Lusi Li, assistant professor, Old Dominion University Department of Computer Science

Rationale and Background

Machine learning (ML), especially deep learning (DL), is becoming pervasive, as it improves efficiency, creates jobs, and strengthens the economy.

This makes DL an increasingly attractive target for cybercriminals who seek out vulnerabilities.

Prior research has shown that a wide range of DL algorithms are vulnerable to polluted data, adversarial inputs, and mimicry, evasion, poisoning, and Trojan attacks, which can leave the resulting DL models embedded with neural backdoors.

A backdoored model behaves normally on clean inputs, but whenever an input carries the attacker's trigger, it is misclassified into the attacker-chosen target category.
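
To make this behavioral contract concrete, the toy Python sketch below shows what a backdoored classifier does. Everything in it is a hypothetical stand-in (the trigger patch, the target label, and the "model" itself are illustrative, not code from this project):

    import numpy as np

    TARGET_LABEL = 7            # attacker-chosen target category (illustrative)
    TRIGGER = np.ones((3, 3))   # small white corner patch acting as the trigger

    def stamp_trigger(image):
        """Return a copy of the image with the trigger patch stamped on."""
        poisoned = image.copy()
        poisoned[:3, :3] = TRIGGER
        return poisoned

    def backdoored_model(image):
        """Toy stand-in for a backdoored classifier: normal behavior on
        clean inputs, but any input carrying the trigger is routed to
        the target label."""
        if np.allclose(image[:3, :3], TRIGGER):
            return TARGET_LABEL            # trigger present -> target class
        return int(image.mean() > 0.5)     # stand-in for the normal decision

    clean = np.random.rand(28, 28)
    print(backdoored_model(clean))                  # normal prediction
    print(backdoored_model(stamp_trigger(clean)))   # always TARGET_LABEL

The danger is that the backdoor is invisible on any clean test set: accuracy looks normal until the trigger appears.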

This project will conduct research at the intersection of AI assurance and security, aligned with COVA CCI's concentration on the defense industry.

Methodology

The research will include a systematic study to design effective defenses against backdoor attacks on DL models. This will include:

  • Development of human-assisted backdoor defenses. The team will work toward a robust Perception-Reasoning pipeline that connects DL models as the perception component with graphical models, which embed human knowledge expressed as logic rules, as the reasoning component (see the first sketch following this list).
  • A plan to derive dual certification of DL model trustworthiness. The team will design efficient methods to certify the robustness of DL models from both human and algorithmic perspectives (see the second sketch following this list).
  • Experiments and prototyping. The team will conduct extensive comparative performance experiments to demonstrate that the proposed approaches outperform existing defenses.
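
The first sketch below shows, under loose assumptions, what a Perception-Reasoning pipeline of this kind could look like: a stand-in function plays the role of the trained DL perception model, and a single hand-written logic rule plays the role of the graphical reasoning component. All names and the rule itself are hypothetical:

    import numpy as np

    def perceive(image):
        """Stand-in for the perception component; a real pipeline would
        run a trained DL classifier here."""
        return "stop_sign"

    # Human knowledge as logic rules: each rule maps a predicted label to
    # a check the raw input must pass for the prediction to be plausible.
    # (One illustrative rule; the project encodes rules in graphical models.)
    RULES = {
        # A genuine stop sign should be predominantly red (channel 0).
        "stop_sign": lambda img: img[..., 0].mean() > img[..., 1:].mean(),
    }

    def reason(label, image):
        """Reasoning component: accept the perception output only if all
        applicable rules hold; otherwise flag a possible trigger."""
        check = RULES.get(label)
        return check is None or bool(check(image))

    image = np.random.rand(32, 32, 3)    # toy input
    label = perceive(image)
    print(label if reason(label, image) else f"suspicious: {label}")

The point this illustrates: a trigger may fool the perception network, but the stamped input typically still violates some human-level consistency rule that the reasoning layer can catch.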
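
The project summary does not specify the certification machinery, so as a point of reference the second sketch implements one standard algorithmic certification technique, Monte-Carlo randomized smoothing (Cohen et al., 2019), with a simplified normal-approximation confidence bound. It is an assumption-laden illustration of what an algorithmic robustness certificate looks like, not the team's dual-certification method:

    import numpy as np
    from scipy.stats import norm

    def smoothed_certify(model, x, sigma=0.25, n=1000, alpha=0.001):
        """Return the majority class under Gaussian input noise and a
        certified L2 robustness radius, or (None, 0.0) to abstain."""
        noisy = x + sigma * np.random.randn(n, *x.shape)
        preds = np.array([model(z) for z in noisy])
        counts = np.bincount(preds)
        top = counts.argmax()
        # Normal-approximation lower confidence bound on the top-class
        # probability (a real implementation would use an exact bound).
        p_lower = counts[top] / n - norm.ppf(1 - alpha) * np.sqrt(0.25 / n)
        if not p_lower > 0.5:
            return None, 0.0               # too uncertain: no certificate
        return int(top), float(sigma * norm.ppf(p_lower))

    toy_model = lambda z: int(z.mean() > 0.5)  # hypothetical classifier
    x = np.full((8, 8), 0.9)                   # input far from the boundary
    print(smoothed_certify(toy_model, x))      # e.g. (1, ~0.4)

A certificate of this form guarantees the smoothed prediction cannot change under any perturbation smaller than the returned radius, which is the kind of rigorous, algorithm-side assurance the dual-certification plan targets.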

Projected Outcomes

  • Make DL models more robust by using human knowledge.
  • Provide rigorous certification (from the perspective of both humans and algorithms) to bolster confidence in DL models. 
  • Offer researchers an easier path to further advances in DL security and privacy research.

Additional impact:

  • Community engagement and workforce development. The project will contribute to fostering and sustaining the AI security and privacy research community, boosting commercialization and workforce development. 
  • Curriculum enrichment and participation of underrepresented groups. The PIs will develop plans to inspire the participation of underrepresented groups, and the prototype will be leveraged as an open-source learning platform for curriculum development and workforce training.