Backdoor Detection and Mitigation in Deep Neural Networks
Coastal Virginia Node
Principal Investigator:
Hongyi Wu, Batten Chair of Cybersecurity and the director of the Center for Cybersecurity Education and Research, Old Dominion University
Co-Principal Investigators:
Jin-Hee Cho, associate professor, computer science, Virginia Tech; Leigh Armistead, president, Peregrine Technical Solutions; Dave Wolfe, vice president, Peregrine Technical Solutions; Tom Murphy, Peregrine Technical Solutions; Chunsheng Xin, professor, electrical and computer engineering, ODU; Jiang Li, professor, electrical and computer engineering, ODU; Rui Ning, research assistant professor, cybersecurity, ODU
Project Description:
Artificial Intelligence (AI) is becoming a critical part of modern warfare, with its game-changing capability for handling large volumes of data and making complex, collaborative decisions in support of self-control, self-regulation, and self-actuation of combat systems. While AI is being adopted as an effective tool for automation where, relative to human intelligence, it is faster, more agile, or lower-cost, it is also becoming an increasingly attractive target, opening vulnerabilities and enabling new forms of cyberattacks. In particular, a wide range of deep learning algorithms are vulnerable to polluted data, adversarial inputs, mimicry attacks, evasion attacks, and poisoning attacks. The resulting deep learning models are embedded with neural backdoors that can cause catastrophic failures. The overarching goals of this project are: (1) investigating existing and emerging neural backdoor attacks, (2) exploring and implementing effective backdoor detection schemes, and (3) developing and testing efficient techniques to mitigate detected backdoors. Researchers from two CCI nodes (Coastal and Southwest) will work with defense industry and Navy/Army collaborators to develop the proposed backdoor detection and mitigation system, contributing significantly to the protection of future intelligent combat systems. The project will also develop new training modules on secure and safe AI and offer them via workshops and bootcamps, preparing students and practitioners with the advanced secure-AI skills needed to succeed in cutting-edge research and industrial projects. Overall, the proposed work will lead to enabling technologies for securing AI systems, accelerating their development and widening their adoption across application domains.
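To make the threat concrete, the poisoning attacks described above can be illustrated with a minimal sketch of a BadNets-style data-poisoning step: a small pixel trigger is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen target class, so that the trained model behaves normally on clean inputs but misclassifies any triggered input. All function names, shapes, and parameters below are illustrative assumptions, not part of the project's actual system.

```python
import numpy as np

def stamp_trigger(images, trigger_value=1.0, patch=3):
    """Stamp a small square trigger in the bottom-right corner of each image.

    `images` is assumed to be a float array of shape (N, H, W); the trigger is
    a `patch` x `patch` block of saturated pixels (illustrative choice).
    """
    poisoned = images.copy()
    poisoned[:, -patch:, -patch:] = trigger_value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Poison a fraction `rate` of the training set.

    Selected samples get the trigger stamped on and their labels flipped to
    `target_label`; everything else is left untouched. Returns the poisoned
    copies plus the indices of the poisoned samples.
    """
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(rate * n), replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    poisoned_images[idx] = stamp_trigger(images[idx])
    poisoned_labels[idx] = target_label
    return poisoned_images, poisoned_labels, idx
```

A model trained on such a set learns the trigger-to-target association as a hidden backdoor, which is precisely what the project's detection schemes aim to expose and its mitigation techniques aim to remove.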