Explainable AI for Medical Image Analysis: A Comparative Study of Post Hoc and Model-Based Explainability Techniques

dc.contributor.author: LATRACH, Mohamed Ali
dc.contributor.author: SASSENE, Abderraouf
dc.contributor.author: Lahsasna, Adel
dc.date.accessioned: 2025-11-05T09:26:23Z
dc.date.available: 2025-11-05T09:26:23Z
dc.date.issued: 2025
dc.description.abstract: Explainable artificial intelligence (XAI) is critical for building trust and ensuring the safe deployment of deep learning models in healthcare. This thesis presents a comparative study of two XAI approaches, Grad-CAM (a post hoc method) and ProtoPNet (a model-based method), applied to multilabel chest X-ray interpretation. Both models were trained and evaluated on the VinDr-CXR dataset under identical conditions. The Grad-CAM approach, built on an EfficientNetV2-S backbone, achieved superior predictive performance (macro ROC AUC = 0.86, macro F1 = 0.72) and generated clear, reliable heatmaps with minimal computational overhead (hit-rate = 64%, mIoU = 42%). In contrast, ProtoPNet, which learns prototypical image patches to produce inherently interpretable "this looks like that" explanations, yielded lower classification metrics (macro ROC AUC = 0.73, macro F1 = 0.52) and weaker localization performance (hit-rate = 0.7%, mIoU = 42%), while incurring approximately 25% more inference time. Despite these drawbacks, ProtoPNet's case-based explanations align more closely with clinical reasoning, offering tangible examples that radiologists find meaningful. Our findings indicate that post hoc methods like Grad-CAM are preferable for rapid deployment and high accuracy. However, the richer, example-driven explanations of ProtoPNet highlight the need to further refine prototype-based models, for instance by optimizing prototype selection and expanding training datasets, so that they can deliver both strong performance and intuitively interpretable results in real-world clinical settings.
dc.identifier.uri: http://dspace.univ-skikda.dz:4000/handle/123456789/5328
dc.language.iso: en
dc.publisher: Faculty of Science
dc.title: Explainable AI for Medical Image Analysis: A Comparative Study of Post Hoc and Model-Based Explainability Techniques
dc.title.alternative: Artificial Intelligence
dc.type: Master's degree thesis
Files
Original bundle
Name: M-006-00023-1.pdf
Size: 6.07 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed to upon submission
Collections