Explainable AI for Credit Risk Assessment: A Data-Driven Approach to Transparent Lending Decisions
DOI:
https://doi.org/10.32996/jefas.2024.6.1.11
Keywords:
Explainable AI (XAI); Credit Risk Assessment; SHAP; LIME; Machine Learning; Interpretability; Lending Decisions; Financial Technology; Model Transparency; Ethical AI
Abstract
In the era of data-driven decision-making, credit risk assessment plays a pivotal role in ensuring the financial stability of lending institutions. However, traditional machine learning models, while accurate, often function as "black boxes," offering limited interpretability for stakeholders. This paper presents an explainable artificial intelligence (XAI) framework designed to enhance transparency in credit risk evaluation. By combining post-hoc explanation methods such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), together with intrinsically interpretable decision trees, alongside robust ensemble methods, we assess creditworthiness using publicly available loan datasets. The proposed approach not only maintains strong predictive accuracy but also offers clear, feature-level insights into lending decisions, fostering trust among loan officers, regulators, and applicants. This study demonstrates that incorporating explainability into AI-driven credit scoring systems bridges the gap between predictive performance and model transparency, paving the way for more ethical and accountable financial practices.
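To illustrate the feature-level attributions the abstract describes, the sketch below computes exact Shapley values, the quantity SHAP approximates, by brute-force coalition enumeration over a toy linear credit-scoring model. The model, its weights, and the applicant features (income, debt ratio, credit history) are illustrative assumptions, not taken from the paper's datasets.

```python
import itertools
import math

# Hypothetical linear credit-score model; weights are illustrative only.
WEIGHTS = {"income": 0.6, "debt_ratio": -0.8, "credit_history": 0.5}

def score(features):
    """Model output: weighted sum of the applicant's feature values."""
    return sum(WEIGHTS[f] * v for f, v in features.items())

def shapley_values(x, baseline):
    """Exact Shapley attributions by enumerating all feature coalitions.

    For each feature i, average its marginal contribution
    score(S + {i}) - score(S) over all coalitions S of the other
    features, with features outside the coalition held at baseline.
    """
    names = list(x)
    n = len(names)
    phi = {}
    for i in names:
        others = [f for f in names if f != i]
        total = 0.0
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Shapley kernel weight for a coalition of size k.
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                with_i = {f: (x[f] if f in S or f == i else baseline[f]) for f in names}
                without_i = {f: (x[f] if f in S else baseline[f]) for f in names}
                total += w * (score(with_i) - score(without_i))
        phi[i] = total
    return phi

# Illustrative applicant vs. an average-applicant baseline.
applicant = {"income": 1.2, "debt_ratio": 0.9, "credit_history": 0.4}
avg = {"income": 1.0, "debt_ratio": 0.5, "credit_history": 0.5}
phi = shapley_values(applicant, avg)

# Efficiency property of Shapley values: the attributions sum to
# score(applicant) - score(baseline).
assert abs(sum(phi.values()) - (score(applicant) - score(avg))) < 1e-9
```

For a linear model each attribution reduces to `w_i * (x_i - baseline_i)`, so here the applicant's high debt ratio contributes negatively (-0.32) while income contributes positively (+0.12); this is the kind of per-feature breakdown a loan officer or regulator can inspect. The SHAP library applies the same principle efficiently to tree ensembles.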
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
