
AUTHOR(S):

Wasiu Olatunde Oladapo, Ismail Olaniyi Muraina, Moses Adeolu Agoi, Solomon Onen Abam, Bashir Oyeniran Ayinde

 

TITLE

Explainable AI (XAI) Methods: Interpretability, Trust, and Applications in Critical Systems: A Systematic Literature Review


ABSTRACT

This systematic literature review examines recent advances in Explainable Artificial Intelligence (XAI) through the lenses of interpretability, trust, and application in critical systems. The study synthesizes the findings of 18 peer-reviewed articles published between 2020 and 2025, providing an overview of XAI frameworks and methods and their domain-specific applications. Prominent XAI tools such as LIME, SHAP, counterfactual explanations, and model-agnostic methods are analyzed across a range of application fields: healthcare, cybersecurity, finance, industrial control systems, and autonomous vehicles. The review highlights the tension between model accuracy and interpretability, as well as the absence of standard metrics for assessing the quality of explanations. It also underscores the need for user- and context-specific explanations to support decision-making in high-stakes environments. Ethical, human-centered, and security considerations, together with industry concerns around trust and safety, are critically evaluated. XAI research has gained increasing traction in both volume and scope, particularly from 2024 to 2025, and the emerging directions of interest point to the need for future work on scalable XAI techniques, evaluation frameworks, and the use of large language models to generate natural-language explanations. By identifying research gaps, summarizing trends, and suggesting best practices for deploying explainable systems in mission-critical tasks, the review contributes to the advancement of trustworthy AI.

KEYWORDS

Explainable AI (XAI), Interpretable AI, AI Explainability, Interpretability, Transparency, Explanation, Understandability, Trust, Critical Systems, Application

 

Cite this paper

Wasiu Olatunde Oladapo, Ismail Olaniyi Muraina, Moses Adeolu Agoi, Solomon Onen Abam, Bashir Oyeniran Ayinde. (2025) Explainable AI (XAI) Methods: Interpretability, Trust, and Applications in Critical Systems: A Systematic Literature Review. International Journal of Computers, 10, 303-318

 

Copyright © 2025 Author(s) retain the copyright of this article.
This article is published under the terms of the Creative Commons Attribution License 4.0