Piloting a Human-AI Cooperative System to Detect Automated Deepfake Deception
Dr. Giuseppe Ateniese
KEY INTERESTS
Cloud security; Cybersecurity; Applied cryptography
AFFILIATIONS/APPOINTMENTS
Professor and Eminent Scholar, Department of Cyber Security Engineering and Department of Computer Science, George Mason University
Faculty Fellow, Commonwealth Cyber Initiative
ACADEMIC DEGREES
Laurea (MSc), Computer Science, University of Salerno
PhD, Computer Science, University of Genoa
PILOTING A HUMAN-AI COOPERATIVE SYSTEM TO DETECT AUTOMATED DEEPFAKE DECEPTION
In June 2022, the mayors of several European cities were deceived into holding video conference calls with a deepfake of Mayor Vitali Klitschko, their counterpart in Kyiv, Ukraine. Although the imposter was eventually discovered, the incident reveals the national security risks posed by cyber threat actors who use deepfake technology to facilitate their attacks. In that case, a human appeared to be controlling the deepfake, but it is now (or soon will be) possible for automated ‘bots’ to control such deepfakes, allowing threat actors to launch targeted attacks at scale. Synthetic media (the automated generation of text, audio, and video) offers cyber threat actors a game-changing capability and represents a clear and present danger to national security. This project seeks to begin development of a new method for detecting automation-enabled social engineering attacks in near real time using a hybrid human-AI system.
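The project summary does not specify how the hybrid human-AI system would be structured. The sketch below is only an illustration of one common human-in-the-loop pattern that such a system might follow: an automated detector scores incoming call segments, confident cases are handled automatically, and uncertain cases are escalated to a human analyst. All function names, thresholds, and scores here are hypothetical placeholders, not elements of the project's actual design.

```python
from dataclasses import dataclass
from enum import Enum
import random


class Verdict(Enum):
    LIKELY_GENUINE = "likely genuine"
    LIKELY_DEEPFAKE = "likely deepfake"
    ESCALATE_TO_HUMAN = "escalate to human analyst"


@dataclass
class SegmentScore:
    segment_id: int
    deepfake_probability: float  # confidence from an automated detector


def automated_detector(segment_id: int) -> SegmentScore:
    """Placeholder for a real deepfake detector; returns a random score."""
    return SegmentScore(segment_id, random.random())


def triage(score: SegmentScore, low: float = 0.2, high: float = 0.8) -> Verdict:
    """Act automatically on confident scores; defer uncertain ones to a human."""
    if score.deepfake_probability <= low:
        return Verdict.LIKELY_GENUINE
    if score.deepfake_probability >= high:
        return Verdict.LIKELY_DEEPFAKE
    return Verdict.ESCALATE_TO_HUMAN


if __name__ == "__main__":
    random.seed(0)
    # Score and triage a handful of hypothetical call segments.
    for seg in range(5):
        score = automated_detector(seg)
        print(seg, f"{score.deepfake_probability:.2f}", triage(score).value)
```

The key idea the sketch illustrates is the division of labor: automation provides the near real-time screening at scale, while human judgment is reserved for the ambiguous cases where a wrong automated call would be most costly.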