Virginia Tech

Piloting a Human-AI Cooperative System to Detect Automated Deepfake Deception

Researchers will develop a new method for detecting automation-enabled social-engineering attacks in near real time, using a hybrid human-artificial intelligence system.

Funded by: CCI Northern Virginia Node

Rationale and Background

Synthetic media (the automated generation of text, audio, and video) presents a game-changing capability to cyber threat actors and represents a clear and present danger to national security.

In June 2022, the mayors of several European cities were deceived into holding video conference calls with a deepfake version of Mayor Vitali Klitschko, their counterpart in Kyiv, Ukraine. 

Although this imposter was eventually discovered, this incident reveals potential national security risks posed by cyber threat actors using deepfake technology to facilitate attacks. 

In this case, there appeared to be a human controlling the deepfake, but it is now (or soon will be) possible for automated bots to control such deepfakes, allowing threat actors to launch targeted attacks at scale. 


This project will pursue two approaches, one centered on a human operator and one on an automated assistant. The two will proceed in parallel and eventually merge.

  • The human-operator track will pilot and validate methodologies for investigating human capabilities in deepfake detection. 
  • The automated-assistant track will create an ensemble of machine-learning deepfake detectors that combines video, audio, text, and other modalities. 
  • In the combined phase, researchers will pair the human operator with the AI assistant to create a proof-of-concept detection system. 
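The multimodal ensemble described above can be illustrated with a late-fusion sketch: each per-modality detector emits a probability that its input is synthetic, and a weighted average produces the combined score. The modality names, scores, and weights below are illustrative placeholders, not the project's actual models.

```python
def ensemble_score(modality_scores, weights=None):
    """Combine per-modality deepfake probabilities into one score.

    modality_scores: dict mapping modality name -> probability in [0, 1]
    weights: optional dict of per-modality weights (defaults to uniform)
    """
    if not modality_scores:
        raise ValueError("at least one modality score is required")
    if weights is None:
        weights = {m: 1.0 for m in modality_scores}
    total = sum(weights[m] for m in modality_scores)
    return sum(weights[m] * p for m, p in modality_scores.items()) / total

# Example: video artifacts strongly suggest a deepfake, audio is ambiguous.
score = ensemble_score({"video": 0.9, "audio": 0.5, "text": 0.7})
```

Late fusion keeps each detector independent, so a new modality can be added without retraining the others; more sophisticated fusion (e.g. a learned combiner) would follow the same interface.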

Techniques for discovering synthetic media-enabled deception could include:

  • Asking questions that an imposter is unlikely to answer accurately.
  • Eliciting behaviors that might reveal artifacts in the communications medium (audio or video distortions, for example).
  • Inducing non-human-like responses.
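One way to reason about these probes is to treat each one as evidence and update a prior belief that the interlocutor is an imposter. The sketch below is a hypothetical illustration of that idea; the probe names and likelihood values are invented for the example, not measured by the project.

```python
import math

# Hypothetical probe outcomes. Each entry maps a probe name to
# (P(failure | imposter), P(failure | genuine)) -- invented values.
PROBES = {
    "failed_personal_question": (0.80, 0.05),
    "audio_artifact_under_motion": (0.60, 0.10),
    "non_human_response_pattern": (0.70, 0.02),
}

def imposter_probability(observed_failures, prior=0.01):
    """Bayesian update of the imposter hypothesis over independent probes."""
    log_odds = math.log(prior / (1 - prior))
    for probe in observed_failures:
        p_fail_imposter, p_fail_genuine = PROBES[probe]
        log_odds += math.log(p_fail_imposter / p_fail_genuine)
    return 1 / (1 + math.exp(-log_odds))

# Each additional failed probe pushes the estimate further from the prior.
p = imposter_probability(["failed_personal_question",
                          "audio_artifact_under_motion"])
```

In a human-AI cooperative system, the AI assistant could maintain such a running estimate while the human chooses which probe to try next.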

Projected Outcomes

Among other goals, the project seeks to mitigate dangers to national security. If the project is successful:

  • Research will provide a significant advantage to those targeted by threat actors using synthetic media to enhance social-engineering attacks and influence operations.
  • The team will have created a human-AI system that is more capable of detecting automation-driven deepfake attacks than any existing system.
  • Trained and prompted users will be able to steer interactions toward extremes that increase the probability of discovery.