
MANDA: On Adversarial Example Detection for Network Intrusion Detection System

Paper Details

Abstract

Adding a small, carefully crafted perturbation to a data record produces an Adversarial Example (AE), an input that can cause a Machine Learning (ML) system to make incorrect predictions.

Researchers launched state-of-the-art attacks against network intrusion detection systems (IDSs) and found that AE attacks also apply to ML-based IDS models, with attack success rates as high as 95 percent.

To counter such attacks, researchers proposed MANDA, an adversarial sample detector that requires no knowledge of the attack strategy and makes no changes to the ML model. (The name comes from MANifold and Decision boundary-based AE detection system.)
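The paper's exact algorithm is not reproduced here, but the two signals the name alludes to can be sketched in a few lines. The sketch below is an illustration only, not the authors' implementation: it approximates distance from the data manifold with a k-nearest-neighbor distance over clean training records, and approximates closeness to the decision boundary by how often tiny random perturbations flip the model's prediction. All class, function, and threshold names are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class SimpleAEDetector:
    """Illustrative two-signal AE detector: manifold distance + boundary sensitivity."""

    def __init__(self, model, train_X, k=5, noise_scale=0.01, n_probes=20):
        self.model = model  # any fitted classifier exposing .predict()
        self.nn = NearestNeighbors(n_neighbors=k).fit(train_X)
        self.noise_scale = noise_scale
        self.n_probes = n_probes

    def manifold_score(self, x):
        # Mean distance to the k nearest clean training records:
        # large values suggest the input lies off the data manifold.
        dists, _ = self.nn.kneighbors(x.reshape(1, -1))
        return dists.mean()

    def boundary_score(self, x):
        # Fraction of small random perturbations that change the prediction:
        # a high flip rate suggests the input sits unusually close to the
        # decision boundary, which is typical of adversarial examples.
        base = self.model.predict(x.reshape(1, -1))[0]
        noise = np.random.normal(0.0, self.noise_scale, size=(self.n_probes, x.size))
        flipped = self.model.predict(x + noise) != base
        return flipped.mean()

    def is_adversarial(self, x, manifold_thr=2.0, boundary_thr=0.3):
        # Flag the record if either signal exceeds its (illustrative) threshold.
        return (self.manifold_score(x) > manifold_thr
                or self.boundary_score(x) > boundary_thr)
```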

MANDA was placed before the IDS model and filtered adversarial samples out of incoming data. The researchers reported detection rates as high as a 98 percent true-positive rate with a less than 5 percent false-positive rate, protecting ML-based network intrusion detection systems against advanced AE attacks.

The system can be attached to any off-the-shelf ML model in various application scenarios, including but not limited to IDSs.
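As a rough illustration of that plug-in deployment, the snippet below places a detector such as the SimpleAEDetector sketched above in front of an existing IDS classifier; the function and variable names are assumptions, not part of the paper.

```python
def classify_traffic(records, ids_model, detector):
    """Screen each record with the AE detector before passing it to the IDS model."""
    results = []
    for x in records:
        if detector.is_adversarial(x):
            # Suspected adversarial input never reaches the downstream model.
            results.append("rejected: suspected adversarial example")
        else:
            results.append(ids_model.predict(x.reshape(1, -1))[0])
    return results
```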