Explainable AI for Medical Image Analysis: A Comparative Study of Post Hoc and Model-Based Explainability Techniques
Date: 2025
Publisher: Faculty of Science
Abstract
Explainable artificial intelligence (XAI) is critical for building trust and ensuring safe deployment
of deep learning models in healthcare. This thesis presents a comparative study
of two XAI approaches—Grad-CAM (a post hoc method) and ProtoPNet (a model-based
method)—applied to multilabel chest X-ray interpretation. Both models were trained and
evaluated on the VinDr-CXR dataset under identical conditions. The Grad-CAM approach,
built on an EfficientNetV2-S backbone, achieved superior predictive performance
(macro ROC AUC = 0.86, macro F1 = 0.72) and generated clear, reliable heatmaps (hit-rate = 64%, mIoU = 42%) with minimal computational overhead. In contrast, ProtoPNet,
which learns prototypical image patches for inherently interpretable “this looks like that”
explanations, produced lower classification metrics (macro ROC AUC = 0.73, macro F1
= 0.52) and weaker localization performance (hit-rate = 0.7%, mIoU = 42%) while incurring
approximately 25% more inference time. Despite these drawbacks, ProtoPNet’s
case-based explanations more closely align with clinical reasoning, offering tangible examples
that radiologists find meaningful. Our findings indicate that, for rapid deployment
and high accuracy, post hoc methods like Grad-CAM are preferable. However, the richer,
example-driven explanations of ProtoPNet highlight the need to further refine prototype-based models—by optimizing prototype selection and expanding datasets—so that they
can deliver both strong performance and intuitively interpretable results in real-world
clinical settings.
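
To illustrate how the post hoc approach described above operates, the following is a minimal Grad-CAM sketch in PyTorch. It assumes a torchvision EfficientNetV2-S backbone with its classification head replaced for multilabel prediction; the hooked layer, the number of findings (NUM_FINDINGS), the input resolution, and the grad_cam helper are illustrative assumptions, not the exact configuration used in the thesis.

```python
# Minimal Grad-CAM sketch for a multilabel chest X-ray classifier.
# Layer choice, label count, and preprocessing are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_v2_s

NUM_FINDINGS = 14  # hypothetical number of VinDr-CXR finding labels

model = efficientnet_v2_s(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, NUM_FINDINGS)
model.eval()

activations, gradients = {}, {}

def save_activation(_, __, output):
    activations["value"] = output

def save_gradient(_, __, grad_output):
    gradients["value"] = grad_output[0]

# Hook the last convolutional stage of the backbone.
target_layer = model.features[-1]
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx):
    """Return a heatmap (H, W) in [0, 1] for one finding."""
    logits = model(image.unsqueeze(0))           # (1, NUM_FINDINGS)
    model.zero_grad()
    logits[0, class_idx].backward()              # gradient of one finding's logit
    acts = activations["value"]                  # (1, C, h, w)
    grads = gradients["value"]                   # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)      # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0].detach()

# Usage with a dummy image; in practice, a preprocessed chest X-ray tensor.
heatmap = grad_cam(torch.randn(3, 384, 384), class_idx=0)
```

Because the backward pass is taken per class logit, the sketch produces a separate heatmap for each finding, which fits the multilabel setting of the study; the resulting maps can then be scored against expert bounding boxes with localization metrics such as hit-rate and mIoU.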