CCI researchers write book on trusting artificial intelligence

February 14, 2023

Volume focuses on gaining confidence in AI

Artificial intelligence is everywhere today, from online shopping to influencing how policies and laws are crafted, but how do we know whether to trust it? CCI researchers have written a first-of-its-kind book to offer answers.

“AI Assurance: Towards Trustworthy, Explainable, Safe and Ethical AI” is written and edited by Feras A. Batarseh, associate professor in the Department of Biological Systems Engineering (BSE) at Virginia Tech, and Laura J. Freeman, deputy director of the Virginia Tech National Security Institute. Multiple authors from academia, including five Virginia universities, as well as industry and the U.S. government, also contributed.

“AI assurance plays a major role in AI and engineering research, and this book serves as a guide to researchers, scientists, and students in their studies and experimentation with AI,” explained Batarseh. “AI also is part of government and policymaking discussions, so policymakers will find the book helpful.”

Batarseh breaks down how each audience––AI researcher, AI practitioner, and policymaker––could apply the book to their work: 

AI Researchers

One of the book’s goals is to provide AI researchers with a foundation to understanding the conceptual, statistical, and theoretical challenges of AI assurance. 

Three literature review studies, covering bias and fairness, explainable AI, and outlier detection, establish the state of the science in AI assurance-related dimensions. A methods section features novel approaches to the assurance of AI systems of all kinds, such as causation, coordination, inference, and data-management methods.

AI Practitioners

Practitioners explore empirical studies on assumptions that influence algorithmic accountability and how they play out in practice. Authors highlight factors that AI engineers negotiate when implementing AI, primarily in economics, health care, engineering, agriculture, and technology policy. The book also presents AI assurance best practices, both theoretical and statistical.

Policymakers

This book will help researchers identify methods for evaluating government use of AI algorithms in a responsible, accountable manner. Methods related to AI for public policy provide measures that can increase trust in AI systems and mitigate potential algorithmic harms through assurance. One section offers examples of evidence-based policymaking. Forewords in each section provide testimonies from executives working to deploy AI in the public sector.

General Audiences

In addition to being an asset for researchers, the book is accessible to general readers. “If you are a non-technical AI enthusiast, we recommend that you begin by reading chapters 1, 2, 5, and 6, before digging deeper into the inner workings of AI methods presented in other chapters,” Batarseh said.

Book cover: “AI Assurance: Towards Trustworthy, Explainable, Safe and Ethical AI” by Feras Batarseh and Laura Freeman
Portrait of Feras Batarseh
Portrait of Laura Freeman
"AI assurance is an octopus in a sea of data: it is required to be intelligent, adaptive, and accessible to all parts of its ecosystem. Octopuses are highly agile and intelligent carnivores; they can store long and short term memory information, can quickly learn from shapes and patterns of sea objects, have been reported to practice observational learning, and are known for building shelters for protective measures against adversaries."

— Nuwer, R., "An Octopus Could Be the Next Model Organism", Scientific American, March 2021.