A LIME-based approach for explainable AI in healthcare

Authors

  • Tejashree Moharekar, Shivaji University, Kolhapur, India
  • Dr. Amol B. Patil, V. P. Institute of Management Studies & Research, Sangli, India
  • Dr. Vidyullata S. Jadhav, V. P. Institute of Management Studies & Research, Sangli, India

Keywords:

Explainable Artificial Intelligence (XAI), AI, LIME, Diabetes Prediction, Multiclass Classification, Machine Learning

Abstract

This study applies Local Interpretable Model-Agnostic Explanations (LIME) to diabetes prediction framed as a multiclass classification problem, in which patients are assigned to one of three categories: No Diabetes, Pre-Diabetes, and Diabetes. The analysis demonstrates how LIME reveals the contribution of features such as Body Mass Index (BMI), age, physical health, and lifestyle factors to the predicted risk category. By providing transparent, per-prediction explanations, LIME strengthens trust in AI systems and helps medical practitioners interpret model outputs. Challenges in applying LIME to multiclass healthcare datasets, such as computational overhead and explanation reliability, are also discussed. The work underscores the role of LIME in enabling ethical, interpretable, and effective AI solutions for diabetes prediction.
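Since the abstract describes LIME applied to a three-class diabetes model, a short sketch may help illustrate the workflow. The snippet below is a minimal, hypothetical example: the synthetic data, the feature names (BMI, Age, PhysicalHealth, PhysActivity), and the random-forest classifier are illustrative assumptions, not the authors' exact dataset or model; only the `lime` package API calls are standard.

```python
# Minimal sketch of a LIME explanation for a multiclass diabetes model.
# The data, feature names, and classifier below are assumptions for
# illustration, not the study's actual setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["BMI", "Age", "PhysicalHealth", "PhysActivity"]  # hypothetical
class_names = ["No Diabetes", "Pre-Diabetes", "Diabetes"]

# Synthetic stand-in for a real tabular diabetes dataset.
X = rng.normal(size=(500, len(feature_names)))
y = rng.integers(0, 3, size=500)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs one record and fits a local linear surrogate to the
# black-box probabilities; top_labels=3 explains all three classes.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=class_names,
    mode="classification",
)
exp = explainer.explain_instance(
    X[0], model.predict_proba, num_features=4, top_labels=3
)
for label in exp.available_labels():
    print(class_names[label], exp.as_list(label=label))
```

Each call to explain_instance fits a fresh local surrogate around one patient record, which is also where the computational overhead noted in the abstract arises: explaining many records in a multiclass setting means many perturbation-and-refit cycles.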




Published

19-09-2025

How to Cite

Moharekar, T., Patil, A. B., & Jadhav, V. S. (2025). A LIME-based approach for explainable AI in healthcare. The International Tax Journal, 52(5), 1732–1741. https://internationaltaxjournal.online/index.php/itj/article/view/186
