Toward Integrated Security and Privacy Solutions for Multi-Modal AI
Researchers from Old Dominion University and William & Mary
Researchers will conduct a comprehensive investigation into the complex interplay between security and privacy in multi-modal AI systems, systematically examining the interdependencies between these issues and the mechanisms by which enhancements in one area affect the other.
Funded by the CCI Coastal Virginia Node
Project Investigators
- Principal Investigator (PI): Daniel Takabi, Old Dominion University School of Cybersecurity
- Co-PI: Rui Ning, Old Dominion University Department of Computer Science
- Co-PI: Lusi Li, Old Dominion University Department of Computer Science
- Co-PI: Yixuan (Janice) Zhang, William & Mary Department of Computer Science
Rationale
Security and privacy issues of artificial intelligence (AI) systems have traditionally been treated as separate concerns, each addressed through different techniques.
However, recent research shows an interdependence between these two issues: efforts to enhance security can inadvertently compromise privacy, and vice versa. This interplay becomes even more critical with the advent of multi-modal AI systems.
In these complex systems, interdependencies between different data modalities can exacerbate trade-offs between security and privacy, amplifying vulnerabilities in both areas.
Projected Outcomes
Researchers will explore innovative countermeasures that navigate the trade-offs between security and privacy, with the goal of developing a balanced, integrated framework that addresses both concerns cohesively.
This will enable the creation of AI systems that:
- Are robust against external threats.
- Safeguard user privacy.

These advances will, in turn:
- Accelerate the safe and responsible deployment of AI technologies in critical applications.
- Mitigate risks.
- Improve trust in AI systems across various sectors.