Calibrating Trust in Human-Machine Interactions with Algorithm Transparency
Dr. Ziyu Yao
KEY INTERESTS
Natural language processing; Artificial intelligence; Human-AI interaction; Language and code semantics; Efficient machine learning
AFFILIATIONS/APPOINTMENTS
Assistant Professor, Department of Computer Science, George Mason University
ACADEMIC DEGREES
BE, Communication Engineering, Beijing University of Posts and Telecommunications
PhD, Computer Science and Engineering, Ohio State University
CALIBRATING TRUST IN HUMAN-MACHINE INTERACTIONS WITH ALGORITHM TRANSPARENCY
AI systems ("machines") gave been increasingly used to facilitate human life, such as assisting people in complex daily tasks and boosting their productivity in the workplace. Most commonly-seen interactions between humans and machines are in one shot, i.e., humans send a task request and the machines complete the task by taking decisions and responding with the decision outputs. however, such one-shot task completion, with no follow-up human-machine interactions (e.g., humans validating the machine decisions), has the potential to expose humans to insecure situations. Building upon previous fundamental research which built a systematic framework for secure human-machine interactions through AI assurance and security, this project seeks to improve upon it by addressing the human component. To achieve this, the research will work to gain a deeper understanding of the "trust" in secure human-machine interactions, specifically investigating the impact of algorithm transparency on human trust and final-task performance.