Deepfake Detection by Leveraging Conditional Generative Adversarial Networks with Uncertainty Quantification
Researchers from Virginia Commonwealth University and Old Dominion University
Researchers will introduce an Uncertainty-Aware Deepfake Detection Framework that integrates Generative Adversarial Networks (GANs), a conditional AutoEncoder (cAE), and Bayesian Neural Networks (BNNs) to address the challenges of Out-of-Distribution (OoD) and False-Positive (FP) detection.
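To illustrate the uncertainty-quantification idea behind the framework (a minimal sketch, not the project's actual implementation), the snippet below uses Monte Carlo dropout, a common practical approximation to a Bayesian Neural Network: dropout is kept on at inference, repeated stochastic forward passes yield a predictive mean and a spread, and inputs with a large spread can be flagged as potentially out-of-distribution instead of being classified outright. All weights, function names, and thresholds here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of a tiny real-vs-fake classifier (illustration only).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 1))

def stochastic_forward(x, drop_p=0.5):
    """One forward pass with dropout kept ON at inference (MC dropout)."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p      # random dropout mask
    h = h * mask / (1.0 - drop_p)            # inverted-dropout scaling
    logit = h @ W2
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid: P(fake)

def predict_with_uncertainty(x, n_samples=100):
    """Predictive mean and std over repeated stochastic passes."""
    preds = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(1, 16))                 # stand-in for an image embedding
mean, std = predict_with_uncertainty(x)
flag_ood = std.item() > 0.15                 # hypothetical OoD threshold
print(f"P(fake)={mean.item():.2f}, uncertainty={std.item():.2f}, flag={flag_ood}")
```

The spread across passes, not the mean alone, is what lets an uncertainty-aware detector abstain on unfamiliar inputs and thereby reduce false positives.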
Funded by the CCI Hub
Project Investigators
- Principal Investigator (PI): Yuichi Motai, Virginia Commonwealth University Department of Electrical & Computer Engineering
- Co-PI: Simegnew Yihunie Alaba, Virginia Commonwealth University Department of Electrical & Computer Engineering
- Co-PI: Mohammad GhasemiGol, Old Dominion University School of Cybersecurity
Rationale
Deepfake technology has rapidly emerged as a destructive tool for creating convincing synthetic images and videos that blur the line between reality and fiction.
This digitally manipulated content, generated using advanced machine learning algorithms, has the potential to spread false information, deceive individuals, and damage reputations.
Such manipulated content has the power to incite public outrage, influence elections, and sow discord among communities.
Projected Outcomes
- Researchers will use artificial intelligence/machine learning (AI/ML) approaches to monitor and mitigate potential deepfake threats by interpreting and visualizing which parts of an image cause uncertainty.
- The proposed AI/ML framework will be made available for various cybersecurity applications to community stakeholders, such as researchers in industry and federal labs.
- Researchers will seek external support for the project's next phase.
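The visualization goal in the first outcome can be sketched as follows: run a stochastic detector several times, record how much each image patch's score varies, and treat high-variance patches as the regions that cause uncertainty. Everything below (the toy scoring function, patch size, and sample count) is a hypothetical stand-in for the project's detector.

```python
import numpy as np

rng = np.random.default_rng(1)

def patch_uncertainty_map(image, score_fn, patch=8, n_samples=30):
    """Std of stochastic per-patch scores across passes: a simple uncertainty heatmap."""
    h, w = image.shape
    gh, gw = h // patch, w // patch
    samples = np.empty((n_samples, gh, gw))
    for t in range(n_samples):
        for i in range(gh):
            for j in range(gw):
                block = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                samples[t, i, j] = score_fn(block)
    return samples.std(axis=0)  # high std = patch the model is unsure about

# Hypothetical stochastic scorer standing in for one MC-dropout detector pass.
def toy_score(block):
    return float(block.mean() + rng.normal(scale=0.05))

img = rng.random((32, 32))                  # dummy grayscale image
heat = patch_uncertainty_map(img, toy_score)
print(heat.shape)                           # a 4x4 grid of per-patch uncertainty
```

Overlaying such a heatmap on the input image is one straightforward way to show stakeholders which regions drive a detector's uncertainty.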