
Knowledge-Enhanced Threat Detection With Large Language Models

Researchers from Virginia Tech and the University of Virginia

Researchers aim to develop techniques for intelligent, knowledge-enhanced, and context-aware cyber-threat detection by harnessing large language models (LLMs) to enable deep cyber reasoning, intelligent decision-making, and the incorporation of external security intelligence into defense strategies.

Funded by the CCI Hub

Rationale

Advanced Persistent Threats (APTs) pose a serious challenge to cyber-threat protection and have caused numerous massive data breaches. Current solutions struggle against APTs because of their sophisticated, multi-step nature.

Threat-detection methods generate an overwhelming volume of alerts, leading to alert fatigue and high false-positive rates.

Existing defenses do not fully utilize the rich threat knowledge in cyber-threat intelligence reports, making it hard to keep up with the fast-evolving threat landscape. 

Projected Outcomes

LLMs have shifted the paradigm from fine-tuning models on task-specific datasets to prompt-based learning, in which models respond to natural-language instructions that tailor their behavior, enabling them to adapt to a wide range of tasks with minimal reliance on annotated data. A minimal sketch of this idea appears below.
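To make the paradigm shift concrete, the following sketch shows prompt-based alert triage in Python: a natural-language instruction steers a generic LLM toward a classification task without any task-specific fine-tuning. The `query_llm` helper and the alert fields are hypothetical placeholders, not part of the project's codebase.

```python
# Minimal sketch of prompt-based learning for alert triage.
# `query_llm` is a hypothetical stand-in for any LLM completion API;
# it returns a canned answer so the sketch runs without a model backend.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real completion API."""
    return "MALICIOUS"

def triage_alert(alert: dict) -> str:
    """Classify a security alert with a natural-language instruction
    instead of a model fine-tuned on labeled alert data."""
    prompt = (
        "You are a security analyst. Given the alert below, answer with "
        "exactly one word: BENIGN, SUSPICIOUS, or MALICIOUS.\n\n"
        f"Process: {alert['process']}\n"
        f"Command line: {alert['cmdline']}\n"
        f"Network destination: {alert['dest']}\n"
    )
    verdict = query_llm(prompt).strip().upper()
    # Fall back to a conservative label if the model strays from the format.
    return verdict if verdict in {"BENIGN", "SUSPICIOUS", "MALICIOUS"} else "SUSPICIOUS"

if __name__ == "__main__":
    example = {
        "process": "powershell.exe",
        "cmdline": "powershell -enc <base64 payload>",
        "dest": "203.0.113.17:443",
    }
    print(triage_alert(example))
```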

Researchers will harness the capabilities of emerging LLMs to enable deep cyber reasoning and intelligent decision-making, and to facilitate the seamless incorporation of external security intelligence into defense strategies.
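One plausible way to incorporate external security intelligence, sketched here purely for illustration, is to retrieve relevant cyber-threat-intelligence (CTI) snippets and prepend them to the detection prompt so the model reasons with outside context rather than the raw alert alone. The snippet list, the keyword-overlap retriever, and the function names are assumptions, not the researchers' actual design.

```python
# Illustrative sketch (not the project's design): enriching a detection
# prompt with snippets retrieved from cyber-threat-intelligence reports.

CTI_SNIPPETS = [
    "APT29 uses encoded PowerShell commands for initial execution (T1059.001).",
    "Cobalt Strike beacons commonly communicate over HTTPS on port 443.",
    "Scheduled tasks are a common persistence mechanism (T1053.005).",
]

def retrieve_cti(alert_text: str, snippets: list[str], k: int = 2) -> list[str]:
    """Rank CTI snippets by naive keyword overlap with the alert text."""
    alert_terms = set(alert_text.lower().split())
    scored = [(len(alert_terms & set(s.lower().split())), s) for s in snippets]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored[:k] if score > 0]

def build_enriched_prompt(alert_text: str) -> str:
    """Combine the raw alert with retrieved threat intelligence so the LLM
    can reason with external security context."""
    context = "\n".join(f"- {s}" for s in retrieve_cti(alert_text, CTI_SNIPPETS))
    return (
        "Relevant threat intelligence:\n" + (context or "- none found") + "\n\n"
        "Alert:\n" + alert_text + "\n\n"
        "Explain whether this alert is likely part of a multi-step intrusion."
    )

if __name__ == "__main__":
    print(build_enriched_prompt(
        "powershell encoded command beaconing to 203.0.113.17 over 443"
    ))
```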