
AI Assurance

As artificial intelligence (AI) research accelerates, machine capabilities are expanding rapidly, and complex AI systems are becoming part of everyday life. It is imperative that trust and assurance mechanisms be built into the development and deployment process. AI systems must be deemed reliable, explainable, unbiased/fair, and privacy-preserving to be truly accepted by humans and fully integrated into society.

CCI’s AI Assurance team is a multi-institution, interdisciplinary research collaboration working to advance foundational knowledge in trust and assurance of AI systems, leveraging use-inspired research questions posed by government and industry partners. We are building the research infrastructure and future workforce needed to meet these challenges and opportunities.

Our research focuses on the convergence of disciplines, from AI/ML, cybersecurity, and engineering to human factors and policy. Our team and collaborators combine their deep expertise in these areas to understand the underlying concepts and principles from both machine and human perspectives, enabling trust and assurance in future AI systems.

Our expertise spans five discipline areas:

AI/ML: modeling and simulation, image processing, data analytics, biomedical informatics, visual data mining, computer vision, statistics, systems biology

Engineering: systems engineering, wireless networks, power/smart grid, energy management, cyber physical systems

Human Factors: human-centric methods, cognitive engineering, social sciences, ergonomics, trust mechanisms

Policy: AI governance, ethics of AI, risk management, AI adoption policy

Security: computer/network security, wireless security, social network security, formal verification, nuclear security

AI systems are, at their core, computer algorithms that perform tasks mirroring human intelligence. They can recognize patterns, like faces in an image or trends in the stock market. They can also connect disparate information sources and automate tasks, like matching the license plate number in an image of your car to your EZPass account to automatically debit a toll charge.

As these AI systems approach or even exceed human-level performance on complex tasks, the underlying algorithms are increasingly becoming a “black box.” Machine learning (ML) methods like deep learning have shown remarkable predictive capability, but when the algorithms make mistakes, we often cannot explain or correct them. This is problematic, especially when AI systems make safety-critical decisions, as when a self-driving car makes an unexplainable, unsafe driving maneuver. It also opens an entirely new frontier of vulnerability: hackers can target the AI algorithms themselves to cause unexpected behavior, rather than attacking traditional computer systems, networks, and data.

Traditional approaches to software assurance won’t work on AI algorithms. The behavior of these “black boxes” can change in real time, making conventional code review impossible. In addition, they are vulnerable to slight deviations in input data and may embed policies and tradeoffs in ways that cannot be observed or changed.
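The sensitivity to slight input deviations can be illustrated with a toy example. The sketch below uses a hypothetical linear classifier (the weights and inputs are invented for illustration, not taken from any real system) and shows the core idea behind gradient-based adversarial examples: nudging each input feature a small step against the decision boundary flips the model’s output.

```python
import numpy as np

# A toy linear classifier (hypothetical weights, for illustration only):
# it labels an input "safe" when w . x + b > 0.
w = np.array([2.0, -3.0, 1.0])
b = 0.1

def predict(x):
    """Return 'safe' or 'unsafe' for an input feature vector x."""
    return "safe" if float(w @ x) + b > 0 else "unsafe"

x = np.array([0.5, 0.2, 0.3])   # original input: classified "safe"

# Nudge each feature a small step against the decision boundary --
# the same idea used by gradient-based adversarial attacks.
eps = 0.15
x_adv = x - eps * np.sign(w)    # each feature changes by at most 0.15

print(predict(x))       # "safe"
print(predict(x_adv))   # "unsafe": a small perturbation flips the label
```

A real deep network is nonlinear, but the same principle applies: the attacker follows the gradient of the model’s output with respect to its input, which is exactly the kind of vulnerability that has no analogue in traditional software review.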

Even if an AI algorithm is functionally perfect, user distrust and misunderstanding can undermine its performance. AI assurance requires accounting for all of these challenges when certifying how confident we are that ML/AI algorithms function as intended and are free of vulnerabilities, whether introduced intentionally or unintentionally through the data or the algorithm itself.


The CCI AI Assurance team is building a testbed and software factory that will accelerate research, support training, and lead to new innovations. Research projects will include methods for quantifying confidence that ML/AI algorithms function as intended and perform reasonably across a variety of circumstances and contexts. Researchers will investigate evaluation metrics and test designs for AI, testing and evaluation of algorithm transparency and explainability, and threat portrayal for ML/AI algorithms. The testbed will enable improvements in designing and testing robust, verifiable, and unbiased ML/AI algorithms.
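One simple instance of such an evaluation metric is prediction stability under input noise. The sketch below is a minimal, hypothetical illustration (the model, data, and `robustness` helper are all invented for this example, not part of any CCI tool): it measures what fraction of a model’s predictions survive random perturbation of the inputs, one crude way to quantify whether an algorithm "performs reasonably" when circumstances vary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model: a fixed linear classifier.
w, b = np.array([1.5, -2.0]), 0.25

def model(X):
    """Predict class 0/1 for each row of X."""
    return (X @ w + b > 0).astype(int)

# A small synthetic evaluation set; the clean predictions serve as
# the reference labels for the stability measurement.
X = rng.normal(size=(200, 2))
y = model(X)

def robustness(model, X, y, sigma, trials=50):
    """Average fraction of predictions unchanged under Gaussian
    input noise of scale sigma -- a simple stability metric."""
    scores = []
    for _ in range(trials):
        X_noisy = X + rng.normal(scale=sigma, size=X.shape)
        scores.append((model(X_noisy) == y).mean())
    return float(np.mean(scores))

print(robustness(model, X, y, sigma=0.05))  # near 1.0: stable under small noise
print(robustness(model, X, y, sigma=1.0))   # noticeably lower: fragile
```

Real test-and-evaluation work goes far beyond this (adversarial rather than random perturbations, calibration, fairness metrics), but the pattern of reporting a quantitative score over controlled perturbations is the common starting point.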