SHAP-Based Explainability for Local and Global Insights in Alzheimer's Detection

Authors

  • Shraddha Khanapur Department of CSE, B.M.S. College of Engineering, India
  • Jyothi S. Nayak Department of CSE, B.M.S. College of Engineering, India
  • B. S. Rajeshwari Department of CSE, B.M.S. College of Engineering, India
  • M. Namratha Department of CSE, B.M.S. College of Engineering, India
  • Chirag B. Bharadwaj Department of CSE, B.M.S. College of Engineering, India
  • Raghav Bhardwaj Department of CSE, B.M.S. College of Engineering, India
Volume: 16 | Issue: 1 | Pages: 30940-30947 | February 2026 | https://doi.org/10.48084/etasr.13932

Abstract

Alzheimer's disease is a progressive neurodegenerative disorder that leads to cognitive decline and loss of independence, making early and accurate diagnosis essential. Recent advances in Machine Learning (ML) have enhanced medical image analysis, but the opaque nature of deep learning models limits their adoption in clinical practice. This study introduces SCR NetX, a CNN model based on the VGG-16 architecture, to classify Alzheimer's disease into four stages: non-demented, very mild, mild, and moderate dementia. To improve interpretability, the model integrates Explainable AI (XAI) using SHAP (SHapley Additive exPlanations) for both local and global analyses. Local explanations highlight the MRI regions that influence individual predictions, aiding case-specific evaluation, while global explanations reveal the overall behavior of the model. Two segmentation methods are employed to ensure precise and clinically interpretable outputs: grid-based segmentation for broad region analysis and SLIC (Simple Linear Iterative Clustering) for fine-grained superpixel analysis. This framework combines accurate classification with transparent decision-making, bridging the gap between AI-driven diagnostics and practical clinical application.
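The core idea described above, attributing a model's prediction to image regions (grid cells or superpixels) via Shapley values, can be illustrated with a minimal self-contained sketch. This is not the paper's SCR NetX pipeline or the `shap` library's API; it is a Monte Carlo Shapley estimator over a hypothetical grid segmentation, with a toy additive "model" standing in for the CNN. Segment ids, the baseline value, and the sample count are all illustrative assumptions.

```python
import numpy as np

def grid_segments(h, w, rows, cols):
    """Grid-based segmentation: label each pixel with a grid-cell id."""
    seg = np.zeros((h, w), dtype=int)
    for i in range(rows):
        for j in range(cols):
            seg[i * h // rows:(i + 1) * h // rows,
                j * w // cols:(j + 1) * w // cols] = i * cols + j
    return seg

def shapley_segments(model, image, seg, baseline=0.0, n_samples=200, rng=None):
    """Monte Carlo Shapley values per segment: average the marginal change
    in the model's output when a segment is revealed, over random orderings,
    masking not-yet-revealed segments to a baseline value."""
    rng = np.random.default_rng(rng)
    ids = np.unique(seg)
    phi = np.zeros(len(ids))
    for _ in range(n_samples):
        order = rng.permutation(ids)
        masked = np.full_like(image, baseline, dtype=float)
        prev = model(masked)
        for s in order:
            masked[seg == s] = image[seg == s]  # reveal this segment
            cur = model(masked)
            phi[s] += cur - prev
            prev = cur
    return phi / n_samples

# Toy example: 4x4 image, 2x2 grid, signal only in the top-left quadrant,
# and a purely additive "model" (the image mean).
seg = grid_segments(4, 4, 2, 2)
img = np.zeros((4, 4))
img[:2, :2] = 1.0
phi = shapley_segments(lambda x: x.mean(), img, seg, n_samples=50, rng=0)
# → phi[0] ≈ 0.25, all other segments ≈ 0 (the model is additive, so the
#   attribution concentrates exactly on the quadrant carrying the signal)
```

For superpixel-level analysis, the same estimator applies unchanged if `seg` comes from an SLIC segmentation instead of a grid; only the label map differs. Note that Shapley efficiency holds by construction: the values sum to `model(image) - model(baseline)`.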

Keywords:

Alzheimer's disease, convolutional neural networks, medical image, explainable AI, interpretability




How to Cite

[1]
S. Khanapur, J. S. Nayak, B. S. Rajeshwari, M. Namratha, C. B. Bharadwaj, and R. Bhardwaj, “SHAP-Based Explainability for Local and Global Insights in Alzheimer’s Detection”, Eng. Technol. Appl. Sci. Res., vol. 16, no. 1, pp. 30940–30947, Feb. 2026.
