Presented at ISCAP 2025 in Louisville, KY. Currently under review for publication in the Journal of Information Systems Education (JISE).
This study addresses a key challenge in cybersecurity training: the black-box nature of AI systems. While AI-powered phishing detection tools are increasingly common, their lack of transparency limits both trust and educational value. To bridge this gap, I designed and evaluated a Human-Centered Explainable AI (HC-XAI) dashboard that integrates three complementary explanation modalities:
- Rule-based logic (traditional phishing indicators such as suspicious URLs and urgency keywords; see the sketch after this list)
- Natural language explanations (concise rationales generated with large language models)
- Visual heatmaps (token-level attention highlighting suspicious words/phrases)
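To give a concrete feel for the first and third modalities, here is a minimal Python sketch. It is not the dashboard's actual code: the indicator names, keyword list, and weights are hypothetical, and a real heatmap would use model attention or attribution scores rather than keyword matches.

```python
import re

# Hypothetical indicator list for illustration only; the dashboard's
# actual rules are documented in the paper.
URGENCY_KEYWORDS = {"urgent", "immediately", "verify", "suspended", "act now"}
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)
IP_URL_PATTERN = re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}", re.IGNORECASE)


def rule_based_indicators(email_text: str) -> dict:
    """Collect simple phishing indicators from one email body."""
    text = email_text.lower()
    urls = URL_PATTERN.findall(email_text)
    return {
        # Urgency language is a classic social-engineering cue.
        "urgency_keywords": sorted(k for k in URGENCY_KEYWORDS if k in text),
        # Links that point at raw IP addresses are a common red flag.
        "ip_based_urls": [u for u in urls if IP_URL_PATTERN.match(u)],
        "url_count": len(urls),
    }


def token_heatmap_weights(email_text: str) -> list:
    """Toy token-level weights: 1.0 for urgency keywords, 0.0 otherwise.

    A real heatmap would use model attention or attribution scores
    instead of exact keyword matches.
    """
    return [
        (tok, 1.0 if tok.lower().strip(".,!:;") in URGENCY_KEYWORDS else 0.0)
        for tok in email_text.split()
    ]


if __name__ == "__main__":
    sample = ("URGENT: your account is suspended. "
              "Verify at http://192.0.2.15/login immediately!")
    print(rule_based_indicators(sample))
    print(token_heatmap_weights(sample))
```

In the dashboard itself, the natural language rationale is generated by a large language model rather than a template, and the heatmap reflects token-level attention rather than simple keyword hits.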
The artifact was developed and tested iteratively following Design Science Research (DSR) principles. The evaluation included:
- Expert heuristic walkthroughs, which identified usability and clarity issues.
- A mixed-methods user study with 23 cybersecurity students, which measured trust, comprehension, and cognitive effort across the three modalities.
Findings:
- Natural language explanations were rated as the clearest and most confidence-boosting.
- Rule-based explanations provided transparency but sometimes felt too technical.
- Heatmaps were visually engaging but required scaffolding to reduce cognitive load.
- Presenting the modalities together improved student trust and understanding by letting students cross-validate one explanation against another.
Contribution:
This research demonstrates how explainable AI can be tailored for educational settings, not just professional analyst environments. The HC-XAI dashboard serves as both a teaching tool and a proof-of-concept for integrating multimodal XAI into cybersecurity curricula, preparing students for AI-augmented workplaces.
Artifacts, screenshots, and setup instructions will be available soon on this site for instructors and researchers interested in adopting or extending the dashboard.